00:00:00.000 Started by upstream project "autotest-per-patch" build number 132685 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.053 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:06.587 The recommended git tool is: git 00:00:06.588 using credential 00000000-0000-0000-0000-000000000002 00:00:06.589 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:06.602 Fetching changes from the remote Git repository 00:00:06.607 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:06.621 Using shallow fetch with depth 1 00:00:06.621 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:06.621 > git --version # timeout=10 00:00:06.636 > git --version # 'git version 2.39.2' 00:00:06.636 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:06.648 Setting http proxy: proxy-dmz.intel.com:911 00:00:06.648 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:11.737 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:11.751 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:11.764 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:11.764 > git config core.sparsecheckout # timeout=10 00:00:11.778 > git read-tree -mu HEAD # timeout=10 00:00:11.796 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:11.823 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:11.823 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:11.910 [Pipeline] Start of Pipeline 00:00:11.923 [Pipeline] library 00:00:11.925 Loading library shm_lib@master 00:00:11.925 Library shm_lib@master is cached. Copying from home. 00:00:11.942 [Pipeline] node 00:00:11.950 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:11.951 [Pipeline] { 00:00:11.962 [Pipeline] catchError 00:00:11.964 [Pipeline] { 00:00:11.973 [Pipeline] wrap 00:00:11.980 [Pipeline] { 00:00:11.986 [Pipeline] stage 00:00:11.987 [Pipeline] { (Prologue) 00:00:12.169 [Pipeline] sh 00:00:12.458 + logger -p user.info -t JENKINS-CI 00:00:12.480 [Pipeline] echo 00:00:12.481 Node: CYP9 00:00:12.486 [Pipeline] sh 00:00:12.783 [Pipeline] setCustomBuildProperty 00:00:12.793 [Pipeline] echo 00:00:12.794 Cleanup processes 00:00:12.798 [Pipeline] sh 00:00:13.082 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:13.082 1031888 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:13.098 [Pipeline] sh 00:00:13.386 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:13.386 ++ grep -v 'sudo pgrep' 00:00:13.386 ++ awk '{print $1}' 00:00:13.386 + sudo kill -9 00:00:13.386 + true 00:00:13.399 [Pipeline] cleanWs 00:00:13.407 [WS-CLEANUP] Deleting project workspace... 00:00:13.407 [WS-CLEANUP] Deferred wipeout is used... 
00:00:13.414 [WS-CLEANUP] done 00:00:13.417 [Pipeline] setCustomBuildProperty 00:00:13.427 [Pipeline] sh 00:00:13.709 + sudo git config --global --replace-all safe.directory '*' 00:00:13.793 [Pipeline] httpRequest 00:00:14.110 [Pipeline] echo 00:00:14.111 Sorcerer 10.211.164.20 is alive 00:00:14.119 [Pipeline] retry 00:00:14.121 [Pipeline] { 00:00:14.133 [Pipeline] httpRequest 00:00:14.163 HttpMethod: GET 00:00:14.169 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.173 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:14.174 Response Code: HTTP/1.1 200 OK 00:00:14.174 Success: Status code 200 is in the accepted range: 200,404 00:00:14.174 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:21.443 [Pipeline] } 00:00:21.461 [Pipeline] // retry 00:00:21.469 [Pipeline] sh 00:00:21.799 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:21.816 [Pipeline] httpRequest 00:00:22.182 [Pipeline] echo 00:00:22.183 Sorcerer 10.211.164.20 is alive 00:00:22.190 [Pipeline] retry 00:00:22.191 [Pipeline] { 00:00:22.200 [Pipeline] httpRequest 00:00:22.204 HttpMethod: GET 00:00:22.205 URL: http://10.211.164.20/packages/spdk_688351e0e466013d77a34b38f3b9742b031c2130.tar.gz 00:00:22.205 Sending request to url: http://10.211.164.20/packages/spdk_688351e0e466013d77a34b38f3b9742b031c2130.tar.gz 00:00:22.213 Response Code: HTTP/1.1 200 OK 00:00:22.213 Success: Status code 200 is in the accepted range: 200,404 00:00:22.213 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_688351e0e466013d77a34b38f3b9742b031c2130.tar.gz 00:06:12.767 [Pipeline] } 00:06:12.785 [Pipeline] // retry 00:06:12.793 [Pipeline] sh 00:06:13.083 + tar --no-same-owner -xf spdk_688351e0e466013d77a34b38f3b9742b031c2130.tar.gz 00:06:16.400 [Pipeline] sh 00:06:16.684 + git -C spdk log 
--oneline -n5 00:06:16.684 688351e0e test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP 00:06:16.684 2826724c4 test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh 00:06:16.684 94ae61614 test/nvmf: Prepare replacements for the network setup 00:06:16.684 8d3947977 spdk_dd: simplify `io_uring_peek_cqe` return code processing 00:06:16.684 77ee034c7 bdev/nvme: Add lock to unprotected operations around attach controller 00:06:16.695 [Pipeline] } 00:06:16.706 [Pipeline] // stage 00:06:16.712 [Pipeline] stage 00:06:16.714 [Pipeline] { (Prepare) 00:06:16.730 [Pipeline] writeFile 00:06:16.745 [Pipeline] sh 00:06:17.032 + logger -p user.info -t JENKINS-CI 00:06:17.046 [Pipeline] sh 00:06:17.331 + logger -p user.info -t JENKINS-CI 00:06:17.345 [Pipeline] sh 00:06:17.632 + cat autorun-spdk.conf 00:06:17.632 SPDK_RUN_FUNCTIONAL_TEST=1 00:06:17.632 SPDK_TEST_NVMF=1 00:06:17.632 SPDK_TEST_NVME_CLI=1 00:06:17.632 SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:17.632 SPDK_TEST_NVMF_NICS=e810 00:06:17.632 SPDK_TEST_VFIOUSER=1 00:06:17.632 SPDK_RUN_UBSAN=1 00:06:17.633 NET_TYPE=phy 00:06:17.641 RUN_NIGHTLY=0 00:06:17.645 [Pipeline] readFile 00:06:17.669 [Pipeline] withEnv 00:06:17.671 [Pipeline] { 00:06:17.683 [Pipeline] sh 00:06:17.972 + set -ex 00:06:17.972 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:06:17.972 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:17.972 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:17.972 ++ SPDK_TEST_NVMF=1 00:06:17.972 ++ SPDK_TEST_NVME_CLI=1 00:06:17.972 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:17.972 ++ SPDK_TEST_NVMF_NICS=e810 00:06:17.972 ++ SPDK_TEST_VFIOUSER=1 00:06:17.972 ++ SPDK_RUN_UBSAN=1 00:06:17.972 ++ NET_TYPE=phy 00:06:17.972 ++ RUN_NIGHTLY=0 00:06:17.972 + case $SPDK_TEST_NVMF_NICS in 00:06:17.972 + DRIVERS=ice 00:06:17.972 + [[ tcp == \r\d\m\a ]] 00:06:17.972 + [[ -n ice ]] 00:06:17.972 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:06:17.972 rmmod: ERROR: Module mlx4_ib is 
not currently loaded 00:06:17.972 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:06:17.972 rmmod: ERROR: Module irdma is not currently loaded 00:06:17.972 rmmod: ERROR: Module i40iw is not currently loaded 00:06:17.972 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:06:17.972 + true 00:06:17.972 + for D in $DRIVERS 00:06:17.972 + sudo modprobe ice 00:06:17.972 + exit 0 00:06:17.983 [Pipeline] } 00:06:17.997 [Pipeline] // withEnv 00:06:18.003 [Pipeline] } 00:06:18.017 [Pipeline] // stage 00:06:18.026 [Pipeline] catchError 00:06:18.028 [Pipeline] { 00:06:18.060 [Pipeline] timeout 00:06:18.060 Timeout set to expire in 1 hr 0 min 00:06:18.062 [Pipeline] { 00:06:18.077 [Pipeline] stage 00:06:18.079 [Pipeline] { (Tests) 00:06:18.093 [Pipeline] sh 00:06:18.412 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:18.412 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:18.412 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:18.412 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:06:18.412 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:18.412 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:06:18.412 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:06:18.412 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:06:18.412 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:06:18.412 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:06:18.412 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:06:18.412 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:06:18.412 + source /etc/os-release 00:06:18.412 ++ NAME='Fedora Linux' 00:06:18.412 ++ VERSION='39 (Cloud Edition)' 00:06:18.412 ++ ID=fedora 00:06:18.412 ++ VERSION_ID=39 00:06:18.412 ++ VERSION_CODENAME= 00:06:18.412 ++ PLATFORM_ID=platform:f39 00:06:18.412 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:06:18.412 ++ ANSI_COLOR='0;38;2;60;110;180' 00:06:18.412 ++ LOGO=fedora-logo-icon 00:06:18.412 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:06:18.412 ++ HOME_URL=https://fedoraproject.org/ 00:06:18.412 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:06:18.412 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:06:18.412 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:06:18.412 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:06:18.412 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:06:18.412 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:06:18.412 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:06:18.412 ++ SUPPORT_END=2024-11-12 00:06:18.412 ++ VARIANT='Cloud Edition' 00:06:18.412 ++ VARIANT_ID=cloud 00:06:18.412 + uname -a 00:06:18.412 Linux spdk-cyp-09 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:06:18.412 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:21.712 Hugepages 00:06:21.712 node hugesize free / total 00:06:21.712 node0 1048576kB 0 / 0 00:06:21.712 node0 2048kB 0 / 0 00:06:21.712 node1 1048576kB 0 / 0 00:06:21.712 node1 2048kB 0 / 0 00:06:21.712 00:06:21.712 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:21.712 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:06:21.712 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 
00:06:21.713 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:06:21.713 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:06:21.713 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:06:21.713 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:06:21.713 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:06:21.713 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:06:21.713 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:06:21.713 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:06:21.713 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:06:21.713 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:06:21.713 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:06:21.713 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:06:21.713 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:06:21.713 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:06:21.713 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:06:21.713 + rm -f /tmp/spdk-ld-path 00:06:21.713 + source autorun-spdk.conf 00:06:21.713 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:21.713 ++ SPDK_TEST_NVMF=1 00:06:21.713 ++ SPDK_TEST_NVME_CLI=1 00:06:21.713 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:06:21.713 ++ SPDK_TEST_NVMF_NICS=e810 00:06:21.713 ++ SPDK_TEST_VFIOUSER=1 00:06:21.713 ++ SPDK_RUN_UBSAN=1 00:06:21.713 ++ NET_TYPE=phy 00:06:21.713 ++ RUN_NIGHTLY=0 00:06:21.713 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:06:21.713 + [[ -n '' ]] 00:06:21.713 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:21.713 + for M in /var/spdk/build-*-manifest.txt 00:06:21.713 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:06:21.713 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:21.713 + for M in /var/spdk/build-*-manifest.txt 00:06:21.713 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:06:21.713 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:21.713 + for M in /var/spdk/build-*-manifest.txt 00:06:21.713 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:06:21.713 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:06:21.713 ++ uname 00:06:21.713 + [[ Linux == \L\i\n\u\x ]] 00:06:21.713 + sudo dmesg -T 00:06:21.713 + sudo dmesg --clear 00:06:21.713 + dmesg_pid=1034030 00:06:21.713 + [[ Fedora Linux == FreeBSD ]] 00:06:21.713 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:21.713 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:21.713 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:06:21.713 + [[ -x /usr/src/fio-static/fio ]] 00:06:21.713 + sudo dmesg -Tw 00:06:21.713 + export FIO_BIN=/usr/src/fio-static/fio 00:06:21.713 + FIO_BIN=/usr/src/fio-static/fio 00:06:21.713 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:06:21.713 + [[ ! -v VFIO_QEMU_BIN ]] 00:06:21.713 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:06:21.713 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:21.713 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:21.713 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:06:21.713 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:21.713 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:21.713 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:21.713 11:49:46 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:06:21.713 11:49:46 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:21.713 11:49:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:06:21.713 11:49:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:06:21.713 11:49:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:06:21.713 11:49:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:06:21.713 11:49:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:06:21.713 11:49:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:06:21.713 11:49:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:06:21.713 11:49:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:06:21.713 11:49:46 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:06:21.713 11:49:46 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:06:21.713 11:49:46 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:21.975 11:49:46 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:06:21.975 11:49:46 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:21.975 11:49:46 -- scripts/common.sh@15 -- $ shopt -s extglob 00:06:21.975 11:49:46 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:06:21.975 11:49:46 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:21.975 11:49:46 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:21.975 11:49:46 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.975 11:49:46 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.975 11:49:46 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.975 11:49:46 -- paths/export.sh@5 -- $ export PATH 00:06:21.975 11:49:46 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:21.975 11:49:46 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:21.975 11:49:46 -- common/autobuild_common.sh@493 -- $ date +%s 00:06:21.975 11:49:46 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1733395786.XXXXXX 00:06:21.975 11:49:46 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1733395786.JYv0Or 00:06:21.975 11:49:46 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:06:21.975 11:49:46 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:06:21.975 11:49:46 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:06:21.975 11:49:46 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:06:21.975 11:49:46 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:06:21.975 11:49:46 -- common/autobuild_common.sh@509 -- $ get_config_params 00:06:21.975 11:49:46 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:06:21.975 11:49:46 -- common/autotest_common.sh@10 -- $ set +x 00:06:21.975 11:49:46 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:06:21.975 11:49:46 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:06:21.975 11:49:46 -- pm/common@17 -- $ local monitor 00:06:21.975 11:49:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:21.975 11:49:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:21.975 11:49:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:21.975 11:49:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:21.975 11:49:46 -- pm/common@21 -- $ date +%s 00:06:21.975 11:49:46 -- pm/common@25 -- $ sleep 1 00:06:21.975 11:49:46 -- pm/common@21 -- $ date +%s 00:06:21.975 11:49:46 -- pm/common@21 -- $ date +%s 00:06:21.975 11:49:46 -- pm/common@21 -- $ date +%s 00:06:21.975 11:49:46 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733395786 00:06:21.975 11:49:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733395786 00:06:21.975 11:49:46 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733395786 00:06:21.975 11:49:46 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1733395786 00:06:21.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733395786_collect-cpu-load.pm.log 00:06:21.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733395786_collect-vmstat.pm.log 00:06:21.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733395786_collect-cpu-temp.pm.log 00:06:21.975 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1733395786_collect-bmc-pm.bmc.pm.log 00:06:22.919 11:49:47 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:06:22.919 11:49:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:06:22.919 11:49:47 -- spdk/autobuild.sh@12 -- $ umask 022 00:06:22.919 11:49:47 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:22.919 11:49:47 -- spdk/autobuild.sh@16 -- $ date -u 00:06:22.919 Thu Dec 5 10:49:47 AM UTC 2024 00:06:22.919 11:49:47 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:06:22.919 v25.01-pre-299-g688351e0e 00:06:22.919 11:49:47 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:06:22.919 11:49:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:06:22.919 11:49:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:06:22.919 11:49:47 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:22.919 11:49:47 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:22.919 11:49:47 -- common/autotest_common.sh@10 -- $ set +x 00:06:22.919 ************************************ 00:06:22.919 START TEST ubsan 00:06:22.919 ************************************ 00:06:22.919 11:49:47 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:06:22.919 using ubsan 00:06:22.919 00:06:22.919 real 0m0.001s 00:06:22.919 user 0m0.000s 00:06:22.919 sys 0m0.000s 00:06:22.919 11:49:47 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:06:22.919 11:49:47 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:06:22.919 ************************************ 00:06:22.919 END TEST ubsan 00:06:22.919 ************************************ 00:06:22.919 11:49:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:06:22.919 11:49:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:06:22.919 11:49:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:06:22.919 11:49:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:06:22.919 11:49:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:06:22.919 11:49:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:06:22.919 11:49:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:06:22.919 11:49:47 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:06:22.919 11:49:47 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:06:23.179 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:23.179 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:23.751 Using 'verbs' RDMA provider 00:06:39.236 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:06:51.490 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:06:52.337 Creating mk/config.mk...done. 00:06:52.337 Creating mk/cc.flags.mk...done. 00:06:52.337 Type 'make' to build. 00:06:52.337 11:50:17 -- spdk/autobuild.sh@70 -- $ run_test make make -j144 00:06:52.337 11:50:17 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:06:52.337 11:50:17 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:06:52.337 11:50:17 -- common/autotest_common.sh@10 -- $ set +x 00:06:52.337 ************************************ 00:06:52.337 START TEST make 00:06:52.337 ************************************ 00:06:52.337 11:50:17 make -- common/autotest_common.sh@1129 -- $ make -j144 00:06:52.598 make[1]: Nothing to be done for 'all'. 
00:06:54.516 The Meson build system 00:06:54.516 Version: 1.5.0 00:06:54.516 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:06:54.516 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:06:54.516 Build type: native build 00:06:54.516 Project name: libvfio-user 00:06:54.516 Project version: 0.0.1 00:06:54.516 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:06:54.516 C linker for the host machine: cc ld.bfd 2.40-14 00:06:54.516 Host machine cpu family: x86_64 00:06:54.516 Host machine cpu: x86_64 00:06:54.516 Run-time dependency threads found: YES 00:06:54.516 Library dl found: YES 00:06:54.516 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:06:54.517 Run-time dependency json-c found: YES 0.17 00:06:54.517 Run-time dependency cmocka found: YES 1.1.7 00:06:54.517 Program pytest-3 found: NO 00:06:54.517 Program flake8 found: NO 00:06:54.517 Program misspell-fixer found: NO 00:06:54.517 Program restructuredtext-lint found: NO 00:06:54.517 Program valgrind found: YES (/usr/bin/valgrind) 00:06:54.517 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:06:54.517 Compiler for C supports arguments -Wmissing-declarations: YES 00:06:54.517 Compiler for C supports arguments -Wwrite-strings: YES 00:06:54.517 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:06:54.517 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:06:54.517 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:06:54.517 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:06:54.517 Build targets in project: 8 00:06:54.517 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:06:54.517 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:06:54.517 00:06:54.517 libvfio-user 0.0.1 00:06:54.517 00:06:54.517 User defined options 00:06:54.517 buildtype : debug 00:06:54.517 default_library: shared 00:06:54.517 libdir : /usr/local/lib 00:06:54.517 00:06:54.517 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:06:54.517 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:06:54.777 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:06:54.777 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:06:54.777 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:06:54.777 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:06:54.777 [5/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:06:54.777 [6/37] Compiling C object samples/null.p/null.c.o 00:06:54.777 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:06:54.777 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:06:54.777 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:06:54.777 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:06:54.777 [11/37] Compiling C object test/unit_tests.p/mocks.c.o 00:06:54.777 [12/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:06:54.777 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:06:54.777 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:06:54.777 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:06:54.777 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:06:54.777 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:06:54.777 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:06:54.777 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:06:54.777 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:06:54.777 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:06:54.777 [22/37] Compiling C object samples/server.p/server.c.o 00:06:54.777 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:06:54.777 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:06:54.777 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:06:54.777 [26/37] Compiling C object samples/client.p/client.c.o 00:06:54.777 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:06:54.777 [28/37] Linking target samples/client 00:06:54.777 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:06:54.777 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:06:55.038 [31/37] Linking target test/unit_tests 00:06:55.038 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:06:55.038 [33/37] Linking target samples/shadow_ioeventfd_server 00:06:55.038 [34/37] Linking target samples/server 00:06:55.038 [35/37] Linking target samples/gpio-pci-idio-16 00:06:55.038 [36/37] Linking target samples/lspci 00:06:55.038 [37/37] Linking target samples/null 00:06:55.038 INFO: autodetecting backend as ninja 00:06:55.038 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:06:55.038 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:06:55.610 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:06:55.610 ninja: no work to do. 
00:07:00.901 The Meson build system 00:07:00.901 Version: 1.5.0 00:07:00.901 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:07:00.901 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:07:00.901 Build type: native build 00:07:00.901 Program cat found: YES (/usr/bin/cat) 00:07:00.901 Project name: DPDK 00:07:00.901 Project version: 24.03.0 00:07:00.901 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:07:00.901 C linker for the host machine: cc ld.bfd 2.40-14 00:07:00.901 Host machine cpu family: x86_64 00:07:00.901 Host machine cpu: x86_64 00:07:00.901 Message: ## Building in Developer Mode ## 00:07:00.901 Program pkg-config found: YES (/usr/bin/pkg-config) 00:07:00.901 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:07:00.901 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:07:00.901 Program python3 found: YES (/usr/bin/python3) 00:07:00.901 Program cat found: YES (/usr/bin/cat) 00:07:00.901 Compiler for C supports arguments -march=native: YES 00:07:00.901 Checking for size of "void *" : 8 00:07:00.901 Checking for size of "void *" : 8 (cached) 00:07:00.901 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:07:00.901 Library m found: YES 00:07:00.901 Library numa found: YES 00:07:00.901 Has header "numaif.h" : YES 00:07:00.901 Library fdt found: NO 00:07:00.901 Library execinfo found: NO 00:07:00.901 Has header "execinfo.h" : YES 00:07:00.901 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:07:00.901 Run-time dependency libarchive found: NO (tried pkgconfig) 00:07:00.901 Run-time dependency libbsd found: NO (tried pkgconfig) 00:07:00.901 Run-time dependency jansson found: NO (tried pkgconfig) 00:07:00.901 Run-time dependency openssl found: YES 3.1.1 00:07:00.901 Run-time 
dependency libpcap found: YES 1.10.4 00:07:00.901 Has header "pcap.h" with dependency libpcap: YES 00:07:00.901 Compiler for C supports arguments -Wcast-qual: YES 00:07:00.901 Compiler for C supports arguments -Wdeprecated: YES 00:07:00.901 Compiler for C supports arguments -Wformat: YES 00:07:00.901 Compiler for C supports arguments -Wformat-nonliteral: NO 00:07:00.901 Compiler for C supports arguments -Wformat-security: NO 00:07:00.901 Compiler for C supports arguments -Wmissing-declarations: YES 00:07:00.901 Compiler for C supports arguments -Wmissing-prototypes: YES 00:07:00.901 Compiler for C supports arguments -Wnested-externs: YES 00:07:00.901 Compiler for C supports arguments -Wold-style-definition: YES 00:07:00.901 Compiler for C supports arguments -Wpointer-arith: YES 00:07:00.901 Compiler for C supports arguments -Wsign-compare: YES 00:07:00.902 Compiler for C supports arguments -Wstrict-prototypes: YES 00:07:00.902 Compiler for C supports arguments -Wundef: YES 00:07:00.902 Compiler for C supports arguments -Wwrite-strings: YES 00:07:00.902 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:07:00.902 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:07:00.902 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:07:00.902 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:07:00.902 Program objdump found: YES (/usr/bin/objdump) 00:07:00.902 Compiler for C supports arguments -mavx512f: YES 00:07:00.902 Checking if "AVX512 checking" compiles: YES 00:07:00.902 Fetching value of define "__SSE4_2__" : 1 00:07:00.902 Fetching value of define "__AES__" : 1 00:07:00.902 Fetching value of define "__AVX__" : 1 00:07:00.902 Fetching value of define "__AVX2__" : 1 00:07:00.902 Fetching value of define "__AVX512BW__" : 1 00:07:00.902 Fetching value of define "__AVX512CD__" : 1 00:07:00.902 Fetching value of define "__AVX512DQ__" : 1 00:07:00.902 Fetching value of define "__AVX512F__" : 1 
00:07:00.902 Fetching value of define "__AVX512VL__" : 1 00:07:00.902 Fetching value of define "__PCLMUL__" : 1 00:07:00.902 Fetching value of define "__RDRND__" : 1 00:07:00.902 Fetching value of define "__RDSEED__" : 1 00:07:00.902 Fetching value of define "__VPCLMULQDQ__" : 1 00:07:00.902 Fetching value of define "__znver1__" : (undefined) 00:07:00.902 Fetching value of define "__znver2__" : (undefined) 00:07:00.902 Fetching value of define "__znver3__" : (undefined) 00:07:00.902 Fetching value of define "__znver4__" : (undefined) 00:07:00.902 Compiler for C supports arguments -Wno-format-truncation: YES 00:07:00.902 Message: lib/log: Defining dependency "log" 00:07:00.902 Message: lib/kvargs: Defining dependency "kvargs" 00:07:00.902 Message: lib/telemetry: Defining dependency "telemetry" 00:07:00.902 Checking for function "getentropy" : NO 00:07:00.902 Message: lib/eal: Defining dependency "eal" 00:07:00.902 Message: lib/ring: Defining dependency "ring" 00:07:00.902 Message: lib/rcu: Defining dependency "rcu" 00:07:00.902 Message: lib/mempool: Defining dependency "mempool" 00:07:00.902 Message: lib/mbuf: Defining dependency "mbuf" 00:07:00.902 Fetching value of define "__PCLMUL__" : 1 (cached) 00:07:00.902 Fetching value of define "__AVX512F__" : 1 (cached) 00:07:00.902 Fetching value of define "__AVX512BW__" : 1 (cached) 00:07:00.902 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:07:00.902 Fetching value of define "__AVX512VL__" : 1 (cached) 00:07:00.902 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:07:00.902 Compiler for C supports arguments -mpclmul: YES 00:07:00.902 Compiler for C supports arguments -maes: YES 00:07:00.902 Compiler for C supports arguments -mavx512f: YES (cached) 00:07:00.902 Compiler for C supports arguments -mavx512bw: YES 00:07:00.902 Compiler for C supports arguments -mavx512dq: YES 00:07:00.902 Compiler for C supports arguments -mavx512vl: YES 00:07:00.902 Compiler for C supports arguments -mvpclmulqdq: YES 
00:07:00.902 Compiler for C supports arguments -mavx2: YES 00:07:00.902 Compiler for C supports arguments -mavx: YES 00:07:00.902 Message: lib/net: Defining dependency "net" 00:07:00.902 Message: lib/meter: Defining dependency "meter" 00:07:00.902 Message: lib/ethdev: Defining dependency "ethdev" 00:07:00.902 Message: lib/pci: Defining dependency "pci" 00:07:00.902 Message: lib/cmdline: Defining dependency "cmdline" 00:07:00.902 Message: lib/hash: Defining dependency "hash" 00:07:00.902 Message: lib/timer: Defining dependency "timer" 00:07:00.902 Message: lib/compressdev: Defining dependency "compressdev" 00:07:00.902 Message: lib/cryptodev: Defining dependency "cryptodev" 00:07:00.902 Message: lib/dmadev: Defining dependency "dmadev" 00:07:00.902 Compiler for C supports arguments -Wno-cast-qual: YES 00:07:00.902 Message: lib/power: Defining dependency "power" 00:07:00.902 Message: lib/reorder: Defining dependency "reorder" 00:07:00.902 Message: lib/security: Defining dependency "security" 00:07:00.902 Has header "linux/userfaultfd.h" : YES 00:07:00.902 Has header "linux/vduse.h" : YES 00:07:00.902 Message: lib/vhost: Defining dependency "vhost" 00:07:00.902 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:07:00.902 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:07:00.902 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:07:00.902 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:07:00.902 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:07:00.902 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:07:00.902 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:07:00.902 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:07:00.902 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:07:00.902 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:07:00.902 
Program doxygen found: YES (/usr/local/bin/doxygen) 00:07:00.902 Configuring doxy-api-html.conf using configuration 00:07:00.902 Configuring doxy-api-man.conf using configuration 00:07:00.902 Program mandb found: YES (/usr/bin/mandb) 00:07:00.902 Program sphinx-build found: NO 00:07:00.902 Configuring rte_build_config.h using configuration 00:07:00.902 Message: 00:07:00.902 ================= 00:07:00.902 Applications Enabled 00:07:00.902 ================= 00:07:00.902 00:07:00.902 apps: 00:07:00.902 00:07:00.902 00:07:00.902 Message: 00:07:00.902 ================= 00:07:00.902 Libraries Enabled 00:07:00.902 ================= 00:07:00.902 00:07:00.902 libs: 00:07:00.902 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:07:00.902 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:07:00.902 cryptodev, dmadev, power, reorder, security, vhost, 00:07:00.902 00:07:00.902 Message: 00:07:00.902 =============== 00:07:00.902 Drivers Enabled 00:07:00.902 =============== 00:07:00.902 00:07:00.902 common: 00:07:00.902 00:07:00.902 bus: 00:07:00.902 pci, vdev, 00:07:00.902 mempool: 00:07:00.902 ring, 00:07:00.902 dma: 00:07:00.902 00:07:00.902 net: 00:07:00.902 00:07:00.902 crypto: 00:07:00.902 00:07:00.902 compress: 00:07:00.902 00:07:00.902 vdpa: 00:07:00.902 00:07:00.902 00:07:00.902 Message: 00:07:00.902 ================= 00:07:00.902 Content Skipped 00:07:00.902 ================= 00:07:00.902 00:07:00.902 apps: 00:07:00.902 dumpcap: explicitly disabled via build config 00:07:00.902 graph: explicitly disabled via build config 00:07:00.902 pdump: explicitly disabled via build config 00:07:00.902 proc-info: explicitly disabled via build config 00:07:00.902 test-acl: explicitly disabled via build config 00:07:00.902 test-bbdev: explicitly disabled via build config 00:07:00.902 test-cmdline: explicitly disabled via build config 00:07:00.902 test-compress-perf: explicitly disabled via build config 00:07:00.902 test-crypto-perf: explicitly disabled via build 
config 00:07:00.902 test-dma-perf: explicitly disabled via build config 00:07:00.902 test-eventdev: explicitly disabled via build config 00:07:00.902 test-fib: explicitly disabled via build config 00:07:00.902 test-flow-perf: explicitly disabled via build config 00:07:00.902 test-gpudev: explicitly disabled via build config 00:07:00.902 test-mldev: explicitly disabled via build config 00:07:00.902 test-pipeline: explicitly disabled via build config 00:07:00.902 test-pmd: explicitly disabled via build config 00:07:00.902 test-regex: explicitly disabled via build config 00:07:00.902 test-sad: explicitly disabled via build config 00:07:00.902 test-security-perf: explicitly disabled via build config 00:07:00.902 00:07:00.902 libs: 00:07:00.902 argparse: explicitly disabled via build config 00:07:00.902 metrics: explicitly disabled via build config 00:07:00.902 acl: explicitly disabled via build config 00:07:00.902 bbdev: explicitly disabled via build config 00:07:00.902 bitratestats: explicitly disabled via build config 00:07:00.902 bpf: explicitly disabled via build config 00:07:00.902 cfgfile: explicitly disabled via build config 00:07:00.902 distributor: explicitly disabled via build config 00:07:00.902 efd: explicitly disabled via build config 00:07:00.902 eventdev: explicitly disabled via build config 00:07:00.902 dispatcher: explicitly disabled via build config 00:07:00.902 gpudev: explicitly disabled via build config 00:07:00.902 gro: explicitly disabled via build config 00:07:00.902 gso: explicitly disabled via build config 00:07:00.902 ip_frag: explicitly disabled via build config 00:07:00.902 jobstats: explicitly disabled via build config 00:07:00.902 latencystats: explicitly disabled via build config 00:07:00.902 lpm: explicitly disabled via build config 00:07:00.902 member: explicitly disabled via build config 00:07:00.902 pcapng: explicitly disabled via build config 00:07:00.902 rawdev: explicitly disabled via build config 00:07:00.902 regexdev: explicitly 
disabled via build config 00:07:00.903 mldev: explicitly disabled via build config 00:07:00.903 rib: explicitly disabled via build config 00:07:00.903 sched: explicitly disabled via build config 00:07:00.903 stack: explicitly disabled via build config 00:07:00.903 ipsec: explicitly disabled via build config 00:07:00.903 pdcp: explicitly disabled via build config 00:07:00.903 fib: explicitly disabled via build config 00:07:00.903 port: explicitly disabled via build config 00:07:00.903 pdump: explicitly disabled via build config 00:07:00.903 table: explicitly disabled via build config 00:07:00.903 pipeline: explicitly disabled via build config 00:07:00.903 graph: explicitly disabled via build config 00:07:00.903 node: explicitly disabled via build config 00:07:00.903 00:07:00.903 drivers: 00:07:00.903 common/cpt: not in enabled drivers build config 00:07:00.903 common/dpaax: not in enabled drivers build config 00:07:00.903 common/iavf: not in enabled drivers build config 00:07:00.903 common/idpf: not in enabled drivers build config 00:07:00.903 common/ionic: not in enabled drivers build config 00:07:00.903 common/mvep: not in enabled drivers build config 00:07:00.903 common/octeontx: not in enabled drivers build config 00:07:00.903 bus/auxiliary: not in enabled drivers build config 00:07:00.903 bus/cdx: not in enabled drivers build config 00:07:00.903 bus/dpaa: not in enabled drivers build config 00:07:00.903 bus/fslmc: not in enabled drivers build config 00:07:00.903 bus/ifpga: not in enabled drivers build config 00:07:00.903 bus/platform: not in enabled drivers build config 00:07:00.903 bus/uacce: not in enabled drivers build config 00:07:00.903 bus/vmbus: not in enabled drivers build config 00:07:00.903 common/cnxk: not in enabled drivers build config 00:07:00.903 common/mlx5: not in enabled drivers build config 00:07:00.903 common/nfp: not in enabled drivers build config 00:07:00.903 common/nitrox: not in enabled drivers build config 00:07:00.903 common/qat: not 
in enabled drivers build config 00:07:00.903 common/sfc_efx: not in enabled drivers build config 00:07:00.903 mempool/bucket: not in enabled drivers build config 00:07:00.903 mempool/cnxk: not in enabled drivers build config 00:07:00.903 mempool/dpaa: not in enabled drivers build config 00:07:00.903 mempool/dpaa2: not in enabled drivers build config 00:07:00.903 mempool/octeontx: not in enabled drivers build config 00:07:00.903 mempool/stack: not in enabled drivers build config 00:07:00.903 dma/cnxk: not in enabled drivers build config 00:07:00.903 dma/dpaa: not in enabled drivers build config 00:07:00.903 dma/dpaa2: not in enabled drivers build config 00:07:00.903 dma/hisilicon: not in enabled drivers build config 00:07:00.903 dma/idxd: not in enabled drivers build config 00:07:00.903 dma/ioat: not in enabled drivers build config 00:07:00.903 dma/skeleton: not in enabled drivers build config 00:07:00.903 net/af_packet: not in enabled drivers build config 00:07:00.903 net/af_xdp: not in enabled drivers build config 00:07:00.903 net/ark: not in enabled drivers build config 00:07:00.903 net/atlantic: not in enabled drivers build config 00:07:00.903 net/avp: not in enabled drivers build config 00:07:00.903 net/axgbe: not in enabled drivers build config 00:07:00.903 net/bnx2x: not in enabled drivers build config 00:07:00.903 net/bnxt: not in enabled drivers build config 00:07:00.903 net/bonding: not in enabled drivers build config 00:07:00.903 net/cnxk: not in enabled drivers build config 00:07:00.903 net/cpfl: not in enabled drivers build config 00:07:00.903 net/cxgbe: not in enabled drivers build config 00:07:00.903 net/dpaa: not in enabled drivers build config 00:07:00.903 net/dpaa2: not in enabled drivers build config 00:07:00.903 net/e1000: not in enabled drivers build config 00:07:00.903 net/ena: not in enabled drivers build config 00:07:00.903 net/enetc: not in enabled drivers build config 00:07:00.903 net/enetfec: not in enabled drivers build config 
00:07:00.903 net/enic: not in enabled drivers build config 00:07:00.903 net/failsafe: not in enabled drivers build config 00:07:00.903 net/fm10k: not in enabled drivers build config 00:07:00.903 net/gve: not in enabled drivers build config 00:07:00.903 net/hinic: not in enabled drivers build config 00:07:00.903 net/hns3: not in enabled drivers build config 00:07:00.903 net/i40e: not in enabled drivers build config 00:07:00.903 net/iavf: not in enabled drivers build config 00:07:00.903 net/ice: not in enabled drivers build config 00:07:00.903 net/idpf: not in enabled drivers build config 00:07:00.903 net/igc: not in enabled drivers build config 00:07:00.903 net/ionic: not in enabled drivers build config 00:07:00.903 net/ipn3ke: not in enabled drivers build config 00:07:00.903 net/ixgbe: not in enabled drivers build config 00:07:00.903 net/mana: not in enabled drivers build config 00:07:00.903 net/memif: not in enabled drivers build config 00:07:00.903 net/mlx4: not in enabled drivers build config 00:07:00.903 net/mlx5: not in enabled drivers build config 00:07:00.903 net/mvneta: not in enabled drivers build config 00:07:00.903 net/mvpp2: not in enabled drivers build config 00:07:00.903 net/netvsc: not in enabled drivers build config 00:07:00.903 net/nfb: not in enabled drivers build config 00:07:00.903 net/nfp: not in enabled drivers build config 00:07:00.903 net/ngbe: not in enabled drivers build config 00:07:00.903 net/null: not in enabled drivers build config 00:07:00.903 net/octeontx: not in enabled drivers build config 00:07:00.903 net/octeon_ep: not in enabled drivers build config 00:07:00.903 net/pcap: not in enabled drivers build config 00:07:00.903 net/pfe: not in enabled drivers build config 00:07:00.903 net/qede: not in enabled drivers build config 00:07:00.903 net/ring: not in enabled drivers build config 00:07:00.903 net/sfc: not in enabled drivers build config 00:07:00.903 net/softnic: not in enabled drivers build config 00:07:00.903 net/tap: not in 
enabled drivers build config 00:07:00.903 net/thunderx: not in enabled drivers build config 00:07:00.903 net/txgbe: not in enabled drivers build config 00:07:00.903 net/vdev_netvsc: not in enabled drivers build config 00:07:00.903 net/vhost: not in enabled drivers build config 00:07:00.903 net/virtio: not in enabled drivers build config 00:07:00.903 net/vmxnet3: not in enabled drivers build config 00:07:00.903 raw/*: missing internal dependency, "rawdev" 00:07:00.903 crypto/armv8: not in enabled drivers build config 00:07:00.903 crypto/bcmfs: not in enabled drivers build config 00:07:00.903 crypto/caam_jr: not in enabled drivers build config 00:07:00.903 crypto/ccp: not in enabled drivers build config 00:07:00.903 crypto/cnxk: not in enabled drivers build config 00:07:00.903 crypto/dpaa_sec: not in enabled drivers build config 00:07:00.903 crypto/dpaa2_sec: not in enabled drivers build config 00:07:00.903 crypto/ipsec_mb: not in enabled drivers build config 00:07:00.903 crypto/mlx5: not in enabled drivers build config 00:07:00.903 crypto/mvsam: not in enabled drivers build config 00:07:00.903 crypto/nitrox: not in enabled drivers build config 00:07:00.903 crypto/null: not in enabled drivers build config 00:07:00.903 crypto/octeontx: not in enabled drivers build config 00:07:00.903 crypto/openssl: not in enabled drivers build config 00:07:00.903 crypto/scheduler: not in enabled drivers build config 00:07:00.903 crypto/uadk: not in enabled drivers build config 00:07:00.903 crypto/virtio: not in enabled drivers build config 00:07:00.903 compress/isal: not in enabled drivers build config 00:07:00.903 compress/mlx5: not in enabled drivers build config 00:07:00.903 compress/nitrox: not in enabled drivers build config 00:07:00.903 compress/octeontx: not in enabled drivers build config 00:07:00.903 compress/zlib: not in enabled drivers build config 00:07:00.903 regex/*: missing internal dependency, "regexdev" 00:07:00.903 ml/*: missing internal dependency, "mldev" 
00:07:00.903 vdpa/ifc: not in enabled drivers build config 00:07:00.903 vdpa/mlx5: not in enabled drivers build config 00:07:00.903 vdpa/nfp: not in enabled drivers build config 00:07:00.903 vdpa/sfc: not in enabled drivers build config 00:07:00.903 event/*: missing internal dependency, "eventdev" 00:07:00.903 baseband/*: missing internal dependency, "bbdev" 00:07:00.903 gpu/*: missing internal dependency, "gpudev" 00:07:00.903 00:07:00.903 00:07:01.164 Build targets in project: 84 00:07:01.164 00:07:01.164 DPDK 24.03.0 00:07:01.164 00:07:01.164 User defined options 00:07:01.164 buildtype : debug 00:07:01.164 default_library : shared 00:07:01.164 libdir : lib 00:07:01.164 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:01.164 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:07:01.164 c_link_args : 00:07:01.164 cpu_instruction_set: native 00:07:01.164 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:07:01.164 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:07:01.164 enable_docs : false 00:07:01.164 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:07:01.164 enable_kmods : false 00:07:01.164 max_lcores : 128 00:07:01.164 tests : false 00:07:01.164 00:07:01.164 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:07:01.741 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:07:01.741 [1/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:07:01.741 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:07:01.741 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:07:01.741 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:07:01.741 [5/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:07:01.741 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:07:01.741 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:07:01.741 [8/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:07:01.741 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:07:01.741 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:07:01.741 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:07:01.741 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:07:01.741 [13/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:07:01.741 [14/267] Linking static target lib/librte_kvargs.a 00:07:01.741 [15/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:07:01.741 [16/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:07:01.741 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:07:02.002 [18/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:07:02.002 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:07:02.002 [20/267] Linking static target lib/librte_log.a 00:07:02.002 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:07:02.002 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:07:02.002 [23/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:07:02.002 [24/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:07:02.002 [25/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:07:02.002 [26/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:07:02.002 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:07:02.002 [28/267] Linking static target lib/librte_pci.a 00:07:02.002 [29/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:07:02.002 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:07:02.002 [31/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:07:02.002 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:07:02.002 [33/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:07:02.002 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:07:02.002 [35/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:07:02.002 [36/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:07:02.002 [37/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:07:02.002 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:07:02.260 [39/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:07:02.260 [40/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.260 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:07:02.260 [42/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.260 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:07:02.260 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:07:02.260 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:07:02.260 [46/267] Compiling C object 
lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:07:02.260 [47/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:07:02.260 [48/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:07:02.260 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:07:02.260 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:07:02.260 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:07:02.260 [52/267] Linking static target lib/librte_ring.a 00:07:02.260 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:07:02.260 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:07:02.260 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:07:02.260 [56/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:07:02.260 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:07:02.260 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:07:02.260 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:07:02.260 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:07:02.260 [61/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:07:02.260 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:07:02.260 [63/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:07:02.261 [64/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:07:02.261 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:07:02.261 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:07:02.261 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:07:02.261 [68/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:07:02.261 [69/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:07:02.261 [70/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:07:02.261 [71/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:07:02.261 [72/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:07:02.261 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:07:02.261 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:07:02.261 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:07:02.261 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:07:02.261 [77/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:07:02.261 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:07:02.261 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:07:02.261 [80/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:07:02.261 [81/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:07:02.261 [82/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:07:02.261 [83/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:07:02.261 [84/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:07:02.261 [85/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:07:02.261 [86/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:07:02.261 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:07:02.261 [88/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:07:02.261 [89/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:07:02.261 [90/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 
00:07:02.261 [91/267] Linking static target lib/librte_meter.a 00:07:02.261 [92/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:07:02.261 [93/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:07:02.261 [94/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:07:02.261 [95/267] Linking static target lib/librte_telemetry.a 00:07:02.520 [96/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:07:02.520 [97/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:07:02.520 [98/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:07:02.520 [99/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:07:02.520 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:07:02.520 [101/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:07:02.520 [102/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:07:02.520 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:07:02.520 [104/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:07:02.520 [105/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:07:02.520 [106/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:07:02.520 [107/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:07:02.520 [108/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:07:02.520 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:07:02.520 [110/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:07:02.520 [111/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:07:02.520 [112/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:07:02.520 [113/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:07:02.520 [114/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:07:02.520 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:07:02.520 [116/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:07:02.520 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:07:02.520 [118/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:07:02.520 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:07:02.520 [120/267] Linking static target lib/librte_timer.a 00:07:02.520 [121/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:07:02.520 [122/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:07:02.520 [123/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:07:02.520 [124/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:07:02.520 [125/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:07:02.520 [126/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:07:02.520 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:07:02.520 [128/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:07:02.520 [129/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:07:02.520 [130/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:07:02.520 [131/267] Linking static target lib/librte_cmdline.a 00:07:02.520 [132/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:07:02.520 [133/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:07:02.520 [134/267] Linking static target lib/librte_net.a 00:07:02.520 [135/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:07:02.520 [136/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:07:02.520 [137/267] 
Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:07:02.520 [138/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:07:02.520 [139/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:07:02.520 [140/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:07:02.520 [141/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:07:02.520 [142/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:07:02.520 [143/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:07:02.520 [144/267] Linking static target lib/librte_dmadev.a 00:07:02.520 [145/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:07:02.520 [146/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:07:02.520 [147/267] Linking static target lib/librte_compressdev.a 00:07:02.520 [148/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:07:02.520 [149/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:07:02.520 [150/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.520 [151/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:07:02.520 [152/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:07:02.520 [153/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:07:02.520 [154/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:07:02.520 [155/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:07:02.520 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:07:02.520 [157/267] Linking static target lib/librte_power.a 00:07:02.520 [158/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:07:02.520 [159/267] Linking static target drivers/libtmp_rte_bus_pci.a 
00:07:02.520 [160/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:07:02.520 [161/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:07:02.520 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:07:02.520 [163/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:07:02.520 [164/267] Linking target lib/librte_log.so.24.1 00:07:02.520 [165/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:07:02.520 [166/267] Linking static target lib/librte_rcu.a 00:07:02.520 [167/267] Linking static target lib/librte_eal.a 00:07:02.520 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:07:02.520 [169/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:07:02.520 [170/267] Linking static target lib/librte_mempool.a 00:07:02.520 [171/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:07:02.520 [172/267] Linking static target lib/librte_reorder.a 00:07:02.520 [173/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:07:02.520 [174/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:07:02.520 [175/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:07:02.520 [176/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:07:02.520 [177/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:07:02.520 [178/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.781 [179/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:02.781 [180/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:07:02.781 [181/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:07:02.781 [182/267] Linking static target lib/librte_security.a 00:07:02.781 [183/267] Linking static target 
drivers/librte_bus_vdev.a 00:07:02.781 [184/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.781 [185/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:07:02.781 [186/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:07:02.781 [187/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:07:02.781 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:07:02.781 [189/267] Linking static target lib/librte_mbuf.a 00:07:02.781 [190/267] Linking target lib/librte_kvargs.so.24.1 00:07:02.781 [191/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:07:02.781 [192/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:07:02.781 [193/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:02.781 [194/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:07:02.781 [195/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:07:02.781 [196/267] Linking static target drivers/librte_bus_pci.a 00:07:02.781 [197/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:07:02.781 [198/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:07:02.781 [199/267] Linking static target lib/librte_hash.a 00:07:02.781 [200/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:02.781 [201/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:07:02.781 [202/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:07:02.781 [203/267] Linking static target drivers/librte_mempool_ring.a 00:07:02.781 [204/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:07:02.781 [205/267] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:07:02.781 [206/267] Linking static target lib/librte_cryptodev.a 00:07:02.781 [207/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:07:03.041 [208/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.041 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.041 [210/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.041 [211/267] Linking target lib/librte_telemetry.so.24.1 00:07:03.041 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.041 [213/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.303 [214/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:07:03.303 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.303 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.303 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.303 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:07:03.303 [219/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:07:03.303 [220/267] Linking static target lib/librte_ethdev.a 00:07:03.563 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.563 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.564 [223/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.564 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.823 
[225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:07:03.823 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:07:04.396 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:07:04.396 [228/267] Linking static target lib/librte_vhost.a 00:07:05.335 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:06.718 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:07:13.301 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.243 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:07:14.243 [233/267] Linking target lib/librte_eal.so.24.1 00:07:14.243 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:07:14.243 [235/267] Linking target lib/librte_meter.so.24.1 00:07:14.243 [236/267] Linking target lib/librte_ring.so.24.1 00:07:14.243 [237/267] Linking target lib/librte_pci.so.24.1 00:07:14.243 [238/267] Linking target lib/librte_timer.so.24.1 00:07:14.243 [239/267] Linking target lib/librte_dmadev.so.24.1 00:07:14.243 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:07:14.517 [241/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:07:14.517 [242/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:07:14.517 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:07:14.517 [244/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:07:14.517 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:07:14.517 [246/267] Linking target lib/librte_rcu.so.24.1 00:07:14.517 [247/267] Linking target drivers/librte_bus_pci.so.24.1 00:07:14.517 
[248/267] Linking target lib/librte_mempool.so.24.1 00:07:14.517 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:07:14.517 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:07:14.777 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:07:14.777 [252/267] Linking target lib/librte_mbuf.so.24.1 00:07:14.777 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:07:14.777 [254/267] Linking target lib/librte_net.so.24.1 00:07:14.777 [255/267] Linking target lib/librte_compressdev.so.24.1 00:07:14.777 [256/267] Linking target lib/librte_reorder.so.24.1 00:07:14.777 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:07:15.037 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:07:15.037 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:07:15.037 [260/267] Linking target lib/librte_hash.so.24.1 00:07:15.037 [261/267] Linking target lib/librte_cmdline.so.24.1 00:07:15.037 [262/267] Linking target lib/librte_ethdev.so.24.1 00:07:15.037 [263/267] Linking target lib/librte_security.so.24.1 00:07:15.037 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:07:15.297 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:07:15.297 [266/267] Linking target lib/librte_power.so.24.1 00:07:15.297 [267/267] Linking target lib/librte_vhost.so.24.1 00:07:15.297 INFO: autodetecting backend as ninja 00:07:15.297 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:07:18.717 CC lib/log/log.o 00:07:18.717 CC lib/log/log_flags.o 00:07:18.717 CC lib/ut/ut.o 00:07:18.717 CC lib/log/log_deprecated.o 00:07:18.717 CC lib/ut_mock/mock.o 00:07:18.717 LIB libspdk_ut_mock.a 00:07:18.717 LIB libspdk_ut.a 00:07:18.717 
LIB libspdk_log.a 00:07:18.717 SO libspdk_ut_mock.so.6.0 00:07:18.717 SO libspdk_log.so.7.1 00:07:18.717 SO libspdk_ut.so.2.0 00:07:18.717 SYMLINK libspdk_ut_mock.so 00:07:18.717 SYMLINK libspdk_ut.so 00:07:18.717 SYMLINK libspdk_log.so 00:07:19.290 CC lib/dma/dma.o 00:07:19.290 CC lib/util/base64.o 00:07:19.290 CC lib/ioat/ioat.o 00:07:19.290 CC lib/util/bit_array.o 00:07:19.290 CXX lib/trace_parser/trace.o 00:07:19.290 CC lib/util/cpuset.o 00:07:19.290 CC lib/util/crc16.o 00:07:19.290 CC lib/util/crc32.o 00:07:19.290 CC lib/util/crc32c.o 00:07:19.290 CC lib/util/crc32_ieee.o 00:07:19.290 CC lib/util/crc64.o 00:07:19.290 CC lib/util/dif.o 00:07:19.290 CC lib/util/fd.o 00:07:19.290 CC lib/util/fd_group.o 00:07:19.290 CC lib/util/file.o 00:07:19.290 CC lib/util/hexlify.o 00:07:19.290 CC lib/util/iov.o 00:07:19.290 CC lib/util/math.o 00:07:19.290 CC lib/util/net.o 00:07:19.290 CC lib/util/pipe.o 00:07:19.290 CC lib/util/strerror_tls.o 00:07:19.290 CC lib/util/string.o 00:07:19.290 CC lib/util/uuid.o 00:07:19.290 CC lib/util/xor.o 00:07:19.290 CC lib/util/zipf.o 00:07:19.290 CC lib/util/md5.o 00:07:19.290 CC lib/vfio_user/host/vfio_user_pci.o 00:07:19.290 CC lib/vfio_user/host/vfio_user.o 00:07:19.290 LIB libspdk_dma.a 00:07:19.290 SO libspdk_dma.so.5.0 00:07:19.551 SYMLINK libspdk_dma.so 00:07:19.551 LIB libspdk_vfio_user.a 00:07:19.551 SO libspdk_vfio_user.so.5.0 00:07:19.812 LIB libspdk_ioat.a 00:07:19.812 SYMLINK libspdk_vfio_user.so 00:07:19.812 LIB libspdk_util.a 00:07:19.812 SO libspdk_ioat.so.7.0 00:07:19.812 SYMLINK libspdk_ioat.so 00:07:19.812 SO libspdk_util.so.10.1 00:07:19.812 SYMLINK libspdk_util.so 00:07:20.073 LIB libspdk_trace_parser.a 00:07:20.073 SO libspdk_trace_parser.so.6.0 00:07:20.073 SYMLINK libspdk_trace_parser.so 00:07:20.342 CC lib/idxd/idxd.o 00:07:20.342 CC lib/idxd/idxd_user.o 00:07:20.342 CC lib/idxd/idxd_kernel.o 00:07:20.342 CC lib/env_dpdk/env.o 00:07:20.342 CC lib/conf/conf.o 00:07:20.342 CC lib/env_dpdk/memory.o 00:07:20.342 CC 
lib/env_dpdk/pci.o 00:07:20.342 CC lib/env_dpdk/init.o 00:07:20.342 CC lib/json/json_parse.o 00:07:20.342 CC lib/vmd/vmd.o 00:07:20.342 CC lib/env_dpdk/threads.o 00:07:20.342 CC lib/json/json_util.o 00:07:20.342 CC lib/env_dpdk/pci_ioat.o 00:07:20.342 CC lib/vmd/led.o 00:07:20.342 CC lib/env_dpdk/pci_virtio.o 00:07:20.342 CC lib/json/json_write.o 00:07:20.342 CC lib/rdma_utils/rdma_utils.o 00:07:20.342 CC lib/env_dpdk/pci_vmd.o 00:07:20.342 CC lib/env_dpdk/pci_idxd.o 00:07:20.342 CC lib/env_dpdk/pci_event.o 00:07:20.342 CC lib/env_dpdk/sigbus_handler.o 00:07:20.342 CC lib/env_dpdk/pci_dpdk.o 00:07:20.342 CC lib/env_dpdk/pci_dpdk_2207.o 00:07:20.342 CC lib/env_dpdk/pci_dpdk_2211.o 00:07:20.605 LIB libspdk_conf.a 00:07:20.605 SO libspdk_conf.so.6.0 00:07:20.605 LIB libspdk_rdma_utils.a 00:07:20.605 LIB libspdk_json.a 00:07:20.605 SO libspdk_rdma_utils.so.1.0 00:07:20.605 SYMLINK libspdk_conf.so 00:07:20.605 SO libspdk_json.so.6.0 00:07:20.605 SYMLINK libspdk_rdma_utils.so 00:07:20.865 SYMLINK libspdk_json.so 00:07:20.865 LIB libspdk_idxd.a 00:07:20.865 SO libspdk_idxd.so.12.1 00:07:20.865 LIB libspdk_vmd.a 00:07:20.865 SYMLINK libspdk_idxd.so 00:07:20.866 SO libspdk_vmd.so.6.0 00:07:21.126 SYMLINK libspdk_vmd.so 00:07:21.126 CC lib/rdma_provider/common.o 00:07:21.126 CC lib/rdma_provider/rdma_provider_verbs.o 00:07:21.126 CC lib/jsonrpc/jsonrpc_server.o 00:07:21.126 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:07:21.126 CC lib/jsonrpc/jsonrpc_client.o 00:07:21.126 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:07:21.386 LIB libspdk_rdma_provider.a 00:07:21.386 SO libspdk_rdma_provider.so.7.0 00:07:21.386 LIB libspdk_jsonrpc.a 00:07:21.386 SYMLINK libspdk_rdma_provider.so 00:07:21.386 SO libspdk_jsonrpc.so.6.0 00:07:21.386 SYMLINK libspdk_jsonrpc.so 00:07:21.647 LIB libspdk_env_dpdk.a 00:07:21.647 SO libspdk_env_dpdk.so.15.1 00:07:21.647 SYMLINK libspdk_env_dpdk.so 00:07:21.906 CC lib/rpc/rpc.o 00:07:21.906 LIB libspdk_rpc.a 00:07:22.166 SO libspdk_rpc.so.6.0 00:07:22.166 SYMLINK 
libspdk_rpc.so 00:07:22.461 CC lib/trace/trace.o 00:07:22.461 CC lib/trace/trace_flags.o 00:07:22.461 CC lib/trace/trace_rpc.o 00:07:22.461 CC lib/notify/notify.o 00:07:22.461 CC lib/keyring/keyring.o 00:07:22.461 CC lib/notify/notify_rpc.o 00:07:22.461 CC lib/keyring/keyring_rpc.o 00:07:22.722 LIB libspdk_notify.a 00:07:22.722 SO libspdk_notify.so.6.0 00:07:22.722 LIB libspdk_keyring.a 00:07:22.722 LIB libspdk_trace.a 00:07:22.722 SO libspdk_keyring.so.2.0 00:07:22.722 SYMLINK libspdk_notify.so 00:07:22.722 SO libspdk_trace.so.11.0 00:07:22.722 SYMLINK libspdk_keyring.so 00:07:22.982 SYMLINK libspdk_trace.so 00:07:23.242 CC lib/sock/sock.o 00:07:23.242 CC lib/sock/sock_rpc.o 00:07:23.242 CC lib/thread/thread.o 00:07:23.242 CC lib/thread/iobuf.o 00:07:23.501 LIB libspdk_sock.a 00:07:23.761 SO libspdk_sock.so.10.0 00:07:23.761 SYMLINK libspdk_sock.so 00:07:24.021 CC lib/nvme/nvme_ctrlr_cmd.o 00:07:24.021 CC lib/nvme/nvme_ctrlr.o 00:07:24.021 CC lib/nvme/nvme_fabric.o 00:07:24.021 CC lib/nvme/nvme_ns_cmd.o 00:07:24.021 CC lib/nvme/nvme_ns.o 00:07:24.021 CC lib/nvme/nvme_pcie_common.o 00:07:24.021 CC lib/nvme/nvme_pcie.o 00:07:24.021 CC lib/nvme/nvme_qpair.o 00:07:24.021 CC lib/nvme/nvme.o 00:07:24.021 CC lib/nvme/nvme_quirks.o 00:07:24.021 CC lib/nvme/nvme_transport.o 00:07:24.021 CC lib/nvme/nvme_discovery.o 00:07:24.021 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:07:24.021 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:07:24.021 CC lib/nvme/nvme_tcp.o 00:07:24.021 CC lib/nvme/nvme_opal.o 00:07:24.021 CC lib/nvme/nvme_io_msg.o 00:07:24.021 CC lib/nvme/nvme_poll_group.o 00:07:24.021 CC lib/nvme/nvme_zns.o 00:07:24.021 CC lib/nvme/nvme_stubs.o 00:07:24.021 CC lib/nvme/nvme_auth.o 00:07:24.021 CC lib/nvme/nvme_cuse.o 00:07:24.021 CC lib/nvme/nvme_vfio_user.o 00:07:24.021 CC lib/nvme/nvme_rdma.o 00:07:24.591 LIB libspdk_thread.a 00:07:24.591 SO libspdk_thread.so.11.0 00:07:24.591 SYMLINK libspdk_thread.so 00:07:25.163 CC lib/blob/blobstore.o 00:07:25.163 CC lib/blob/request.o 
00:07:25.163 CC lib/blob/zeroes.o 00:07:25.163 CC lib/blob/blob_bs_dev.o 00:07:25.163 CC lib/virtio/virtio.o 00:07:25.163 CC lib/virtio/virtio_vhost_user.o 00:07:25.163 CC lib/virtio/virtio_vfio_user.o 00:07:25.163 CC lib/virtio/virtio_pci.o 00:07:25.163 CC lib/accel/accel.o 00:07:25.163 CC lib/accel/accel_rpc.o 00:07:25.163 CC lib/accel/accel_sw.o 00:07:25.163 CC lib/init/json_config.o 00:07:25.163 CC lib/vfu_tgt/tgt_endpoint.o 00:07:25.163 CC lib/init/subsystem.o 00:07:25.163 CC lib/vfu_tgt/tgt_rpc.o 00:07:25.163 CC lib/init/subsystem_rpc.o 00:07:25.163 CC lib/init/rpc.o 00:07:25.163 CC lib/fsdev/fsdev.o 00:07:25.163 CC lib/fsdev/fsdev_io.o 00:07:25.163 CC lib/fsdev/fsdev_rpc.o 00:07:25.424 LIB libspdk_init.a 00:07:25.424 SO libspdk_init.so.6.0 00:07:25.424 LIB libspdk_vfu_tgt.a 00:07:25.424 LIB libspdk_virtio.a 00:07:25.424 SO libspdk_vfu_tgt.so.3.0 00:07:25.424 SO libspdk_virtio.so.7.0 00:07:25.424 SYMLINK libspdk_init.so 00:07:25.424 SYMLINK libspdk_vfu_tgt.so 00:07:25.424 SYMLINK libspdk_virtio.so 00:07:25.686 LIB libspdk_fsdev.a 00:07:25.686 SO libspdk_fsdev.so.2.0 00:07:25.947 CC lib/event/app.o 00:07:25.947 CC lib/event/reactor.o 00:07:25.947 CC lib/event/log_rpc.o 00:07:25.948 CC lib/event/app_rpc.o 00:07:25.948 CC lib/event/scheduler_static.o 00:07:25.948 SYMLINK libspdk_fsdev.so 00:07:25.948 LIB libspdk_accel.a 00:07:26.210 SO libspdk_accel.so.16.0 00:07:26.210 LIB libspdk_nvme.a 00:07:26.210 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:07:26.210 SYMLINK libspdk_accel.so 00:07:26.210 LIB libspdk_event.a 00:07:26.210 SO libspdk_nvme.so.15.0 00:07:26.210 SO libspdk_event.so.14.0 00:07:26.472 SYMLINK libspdk_event.so 00:07:26.472 SYMLINK libspdk_nvme.so 00:07:26.472 CC lib/bdev/bdev.o 00:07:26.472 CC lib/bdev/bdev_rpc.o 00:07:26.472 CC lib/bdev/bdev_zone.o 00:07:26.472 CC lib/bdev/part.o 00:07:26.472 CC lib/bdev/scsi_nvme.o 00:07:26.734 LIB libspdk_fuse_dispatcher.a 00:07:26.734 SO libspdk_fuse_dispatcher.so.1.0 00:07:26.995 SYMLINK 
libspdk_fuse_dispatcher.so 00:07:27.567 LIB libspdk_blob.a 00:07:27.567 SO libspdk_blob.so.12.0 00:07:27.829 SYMLINK libspdk_blob.so 00:07:28.091 CC lib/lvol/lvol.o 00:07:28.091 CC lib/blobfs/blobfs.o 00:07:28.091 CC lib/blobfs/tree.o 00:07:29.034 LIB libspdk_blobfs.a 00:07:29.034 SO libspdk_blobfs.so.11.0 00:07:29.034 LIB libspdk_bdev.a 00:07:29.034 LIB libspdk_lvol.a 00:07:29.034 SO libspdk_bdev.so.17.0 00:07:29.034 SYMLINK libspdk_blobfs.so 00:07:29.034 SO libspdk_lvol.so.11.0 00:07:29.034 SYMLINK libspdk_lvol.so 00:07:29.034 SYMLINK libspdk_bdev.so 00:07:29.296 CC lib/scsi/dev.o 00:07:29.296 CC lib/scsi/lun.o 00:07:29.296 CC lib/scsi/port.o 00:07:29.296 CC lib/scsi/scsi.o 00:07:29.296 CC lib/scsi/scsi_bdev.o 00:07:29.296 CC lib/scsi/scsi_pr.o 00:07:29.296 CC lib/scsi/scsi_rpc.o 00:07:29.296 CC lib/scsi/task.o 00:07:29.556 CC lib/nvmf/ctrlr.o 00:07:29.556 CC lib/nvmf/ctrlr_discovery.o 00:07:29.556 CC lib/nbd/nbd.o 00:07:29.556 CC lib/nvmf/ctrlr_bdev.o 00:07:29.556 CC lib/nvmf/subsystem.o 00:07:29.556 CC lib/nbd/nbd_rpc.o 00:07:29.556 CC lib/nvmf/nvmf.o 00:07:29.556 CC lib/ftl/ftl_core.o 00:07:29.556 CC lib/nvmf/nvmf_rpc.o 00:07:29.556 CC lib/ftl/ftl_init.o 00:07:29.556 CC lib/nvmf/transport.o 00:07:29.556 CC lib/ftl/ftl_layout.o 00:07:29.556 CC lib/nvmf/tcp.o 00:07:29.556 CC lib/ftl/ftl_debug.o 00:07:29.556 CC lib/ublk/ublk.o 00:07:29.556 CC lib/nvmf/stubs.o 00:07:29.556 CC lib/ftl/ftl_io.o 00:07:29.556 CC lib/ublk/ublk_rpc.o 00:07:29.556 CC lib/nvmf/mdns_server.o 00:07:29.556 CC lib/ftl/ftl_sb.o 00:07:29.556 CC lib/nvmf/vfio_user.o 00:07:29.556 CC lib/ftl/ftl_l2p.o 00:07:29.556 CC lib/ftl/ftl_l2p_flat.o 00:07:29.556 CC lib/nvmf/rdma.o 00:07:29.556 CC lib/ftl/ftl_nv_cache.o 00:07:29.556 CC lib/nvmf/auth.o 00:07:29.556 CC lib/ftl/ftl_band.o 00:07:29.556 CC lib/ftl/ftl_band_ops.o 00:07:29.556 CC lib/ftl/ftl_writer.o 00:07:29.556 CC lib/ftl/ftl_rq.o 00:07:29.556 CC lib/ftl/ftl_reloc.o 00:07:29.556 CC lib/ftl/ftl_l2p_cache.o 00:07:29.556 CC lib/ftl/ftl_p2l.o 
00:07:29.556 CC lib/ftl/ftl_p2l_log.o 00:07:29.556 CC lib/ftl/mngt/ftl_mngt.o 00:07:29.556 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:07:29.556 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:07:29.556 CC lib/ftl/mngt/ftl_mngt_startup.o 00:07:29.556 CC lib/ftl/mngt/ftl_mngt_md.o 00:07:29.556 CC lib/ftl/mngt/ftl_mngt_misc.o 00:07:29.556 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:07:29.556 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:07:29.556 CC lib/ftl/mngt/ftl_mngt_band.o 00:07:29.556 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:07:29.556 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:07:29.556 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:07:29.556 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:07:29.556 CC lib/ftl/utils/ftl_conf.o 00:07:29.556 CC lib/ftl/utils/ftl_md.o 00:07:29.556 CC lib/ftl/utils/ftl_mempool.o 00:07:29.556 CC lib/ftl/utils/ftl_bitmap.o 00:07:29.556 CC lib/ftl/utils/ftl_property.o 00:07:29.556 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:07:29.556 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:07:29.556 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:07:29.556 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:07:29.556 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:07:29.556 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:07:29.556 CC lib/ftl/upgrade/ftl_sb_v3.o 00:07:29.556 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:07:29.556 CC lib/ftl/upgrade/ftl_sb_v5.o 00:07:29.556 CC lib/ftl/nvc/ftl_nvc_dev.o 00:07:29.556 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:07:29.556 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:07:29.556 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:07:29.556 CC lib/ftl/base/ftl_base_dev.o 00:07:29.556 CC lib/ftl/ftl_trace.o 00:07:29.556 CC lib/ftl/base/ftl_base_bdev.o 00:07:30.129 LIB libspdk_nbd.a 00:07:30.129 LIB libspdk_scsi.a 00:07:30.129 SO libspdk_nbd.so.7.0 00:07:30.129 SO libspdk_scsi.so.9.0 00:07:30.129 SYMLINK libspdk_nbd.so 00:07:30.129 SYMLINK libspdk_scsi.so 00:07:30.129 LIB libspdk_ublk.a 00:07:30.129 SO libspdk_ublk.so.3.0 00:07:30.390 SYMLINK libspdk_ublk.so 00:07:30.390 LIB libspdk_ftl.a 00:07:30.390 CC lib/iscsi/conn.o 00:07:30.390 
CC lib/iscsi/init_grp.o 00:07:30.390 CC lib/iscsi/iscsi.o 00:07:30.390 CC lib/iscsi/param.o 00:07:30.390 CC lib/iscsi/tgt_node.o 00:07:30.390 CC lib/iscsi/portal_grp.o 00:07:30.390 CC lib/iscsi/iscsi_subsystem.o 00:07:30.390 CC lib/iscsi/iscsi_rpc.o 00:07:30.390 CC lib/iscsi/task.o 00:07:30.390 CC lib/vhost/vhost.o 00:07:30.390 CC lib/vhost/vhost_rpc.o 00:07:30.390 CC lib/vhost/vhost_scsi.o 00:07:30.390 CC lib/vhost/vhost_blk.o 00:07:30.390 CC lib/vhost/rte_vhost_user.o 00:07:30.650 SO libspdk_ftl.so.9.0 00:07:30.910 SYMLINK libspdk_ftl.so 00:07:31.482 LIB libspdk_nvmf.a 00:07:31.483 SO libspdk_nvmf.so.20.0 00:07:31.483 LIB libspdk_vhost.a 00:07:31.483 SO libspdk_vhost.so.8.0 00:07:31.744 SYMLINK libspdk_vhost.so 00:07:31.744 SYMLINK libspdk_nvmf.so 00:07:31.744 LIB libspdk_iscsi.a 00:07:31.744 SO libspdk_iscsi.so.8.0 00:07:32.006 SYMLINK libspdk_iscsi.so 00:07:32.579 CC module/env_dpdk/env_dpdk_rpc.o 00:07:32.579 CC module/vfu_device/vfu_virtio.o 00:07:32.579 CC module/vfu_device/vfu_virtio_blk.o 00:07:32.579 CC module/vfu_device/vfu_virtio_scsi.o 00:07:32.579 CC module/vfu_device/vfu_virtio_rpc.o 00:07:32.579 CC module/vfu_device/vfu_virtio_fs.o 00:07:32.579 CC module/keyring/file/keyring.o 00:07:32.579 LIB libspdk_env_dpdk_rpc.a 00:07:32.579 CC module/keyring/file/keyring_rpc.o 00:07:32.579 CC module/sock/posix/posix.o 00:07:32.579 CC module/accel/iaa/accel_iaa.o 00:07:32.579 CC module/accel/iaa/accel_iaa_rpc.o 00:07:32.579 CC module/accel/error/accel_error.o 00:07:32.579 CC module/accel/ioat/accel_ioat_rpc.o 00:07:32.579 CC module/keyring/linux/keyring.o 00:07:32.579 CC module/accel/error/accel_error_rpc.o 00:07:32.579 CC module/blob/bdev/blob_bdev.o 00:07:32.579 CC module/accel/ioat/accel_ioat.o 00:07:32.579 CC module/keyring/linux/keyring_rpc.o 00:07:32.579 CC module/scheduler/dynamic/scheduler_dynamic.o 00:07:32.579 CC module/accel/dsa/accel_dsa.o 00:07:32.579 CC module/accel/dsa/accel_dsa_rpc.o 00:07:32.579 CC module/scheduler/dpdk_governor/dpdk_governor.o 
00:07:32.579 CC module/fsdev/aio/fsdev_aio.o 00:07:32.579 CC module/scheduler/gscheduler/gscheduler.o 00:07:32.579 CC module/fsdev/aio/fsdev_aio_rpc.o 00:07:32.579 CC module/fsdev/aio/linux_aio_mgr.o 00:07:32.579 SO libspdk_env_dpdk_rpc.so.6.0 00:07:32.840 SYMLINK libspdk_env_dpdk_rpc.so 00:07:32.840 LIB libspdk_keyring_file.a 00:07:32.840 LIB libspdk_keyring_linux.a 00:07:32.840 LIB libspdk_scheduler_gscheduler.a 00:07:32.840 LIB libspdk_scheduler_dpdk_governor.a 00:07:32.840 SO libspdk_keyring_linux.so.1.0 00:07:32.840 SO libspdk_scheduler_gscheduler.so.4.0 00:07:32.840 SO libspdk_keyring_file.so.2.0 00:07:32.840 LIB libspdk_accel_ioat.a 00:07:32.840 LIB libspdk_accel_iaa.a 00:07:32.840 LIB libspdk_accel_error.a 00:07:32.840 LIB libspdk_scheduler_dynamic.a 00:07:32.840 SO libspdk_scheduler_dpdk_governor.so.4.0 00:07:32.840 SO libspdk_accel_ioat.so.6.0 00:07:32.840 SO libspdk_accel_error.so.2.0 00:07:32.840 SYMLINK libspdk_keyring_linux.so 00:07:33.101 SO libspdk_accel_iaa.so.3.0 00:07:33.101 SYMLINK libspdk_scheduler_gscheduler.so 00:07:33.101 SO libspdk_scheduler_dynamic.so.4.0 00:07:33.101 SYMLINK libspdk_keyring_file.so 00:07:33.101 SYMLINK libspdk_scheduler_dpdk_governor.so 00:07:33.101 LIB libspdk_blob_bdev.a 00:07:33.101 LIB libspdk_accel_dsa.a 00:07:33.101 SYMLINK libspdk_accel_ioat.so 00:07:33.101 SYMLINK libspdk_accel_error.so 00:07:33.101 SO libspdk_blob_bdev.so.12.0 00:07:33.101 SYMLINK libspdk_accel_iaa.so 00:07:33.101 SYMLINK libspdk_scheduler_dynamic.so 00:07:33.101 SO libspdk_accel_dsa.so.5.0 00:07:33.101 LIB libspdk_vfu_device.a 00:07:33.101 SYMLINK libspdk_blob_bdev.so 00:07:33.101 SO libspdk_vfu_device.so.3.0 00:07:33.101 SYMLINK libspdk_accel_dsa.so 00:07:33.101 SYMLINK libspdk_vfu_device.so 00:07:33.361 LIB libspdk_fsdev_aio.a 00:07:33.361 LIB libspdk_sock_posix.a 00:07:33.361 SO libspdk_fsdev_aio.so.1.0 00:07:33.361 SO libspdk_sock_posix.so.6.0 00:07:33.361 SYMLINK libspdk_fsdev_aio.so 00:07:33.622 SYMLINK libspdk_sock_posix.so 00:07:33.622 
CC module/bdev/delay/vbdev_delay.o 00:07:33.622 CC module/bdev/delay/vbdev_delay_rpc.o 00:07:33.622 CC module/bdev/gpt/gpt.o 00:07:33.622 CC module/bdev/nvme/bdev_nvme.o 00:07:33.622 CC module/bdev/gpt/vbdev_gpt.o 00:07:33.622 CC module/bdev/nvme/bdev_nvme_rpc.o 00:07:33.622 CC module/bdev/nvme/nvme_rpc.o 00:07:33.622 CC module/bdev/nvme/bdev_mdns_client.o 00:07:33.622 CC module/bdev/lvol/vbdev_lvol.o 00:07:33.622 CC module/bdev/malloc/bdev_malloc.o 00:07:33.622 CC module/bdev/nvme/vbdev_opal.o 00:07:33.622 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:07:33.622 CC module/blobfs/bdev/blobfs_bdev.o 00:07:33.622 CC module/bdev/nvme/vbdev_opal_rpc.o 00:07:33.622 CC module/bdev/malloc/bdev_malloc_rpc.o 00:07:33.622 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:07:33.622 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:07:33.622 CC module/bdev/error/vbdev_error.o 00:07:33.622 CC module/bdev/error/vbdev_error_rpc.o 00:07:33.622 CC module/bdev/ftl/bdev_ftl.o 00:07:33.622 CC module/bdev/ftl/bdev_ftl_rpc.o 00:07:33.622 CC module/bdev/aio/bdev_aio.o 00:07:33.622 CC module/bdev/aio/bdev_aio_rpc.o 00:07:33.622 CC module/bdev/zone_block/vbdev_zone_block.o 00:07:33.622 CC module/bdev/passthru/vbdev_passthru.o 00:07:33.622 CC module/bdev/null/bdev_null.o 00:07:33.622 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:07:33.622 CC module/bdev/iscsi/bdev_iscsi.o 00:07:33.622 CC module/bdev/null/bdev_null_rpc.o 00:07:33.622 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:07:33.622 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:07:33.622 CC module/bdev/virtio/bdev_virtio_scsi.o 00:07:33.622 CC module/bdev/raid/bdev_raid.o 00:07:33.622 CC module/bdev/virtio/bdev_virtio_rpc.o 00:07:33.622 CC module/bdev/virtio/bdev_virtio_blk.o 00:07:33.622 CC module/bdev/split/vbdev_split.o 00:07:33.622 CC module/bdev/raid/bdev_raid_rpc.o 00:07:33.622 CC module/bdev/split/vbdev_split_rpc.o 00:07:33.622 CC module/bdev/raid/bdev_raid_sb.o 00:07:33.622 CC module/bdev/raid/raid0.o 00:07:33.622 CC module/bdev/raid/raid1.o 
00:07:33.622 CC module/bdev/raid/concat.o 00:07:33.883 LIB libspdk_blobfs_bdev.a 00:07:34.144 SO libspdk_blobfs_bdev.so.6.0 00:07:34.144 LIB libspdk_bdev_split.a 00:07:34.144 LIB libspdk_bdev_gpt.a 00:07:34.144 LIB libspdk_bdev_null.a 00:07:34.144 LIB libspdk_bdev_error.a 00:07:34.144 SO libspdk_bdev_split.so.6.0 00:07:34.144 SO libspdk_bdev_null.so.6.0 00:07:34.144 SO libspdk_bdev_gpt.so.6.0 00:07:34.144 LIB libspdk_bdev_passthru.a 00:07:34.144 SYMLINK libspdk_blobfs_bdev.so 00:07:34.144 LIB libspdk_bdev_ftl.a 00:07:34.144 SO libspdk_bdev_error.so.6.0 00:07:34.144 SO libspdk_bdev_passthru.so.6.0 00:07:34.144 SYMLINK libspdk_bdev_split.so 00:07:34.144 LIB libspdk_bdev_zone_block.a 00:07:34.144 SYMLINK libspdk_bdev_null.so 00:07:34.144 LIB libspdk_bdev_malloc.a 00:07:34.144 SYMLINK libspdk_bdev_gpt.so 00:07:34.144 SO libspdk_bdev_ftl.so.6.0 00:07:34.144 LIB libspdk_bdev_aio.a 00:07:34.144 LIB libspdk_bdev_delay.a 00:07:34.144 SO libspdk_bdev_zone_block.so.6.0 00:07:34.144 LIB libspdk_bdev_iscsi.a 00:07:34.144 SYMLINK libspdk_bdev_passthru.so 00:07:34.144 SO libspdk_bdev_malloc.so.6.0 00:07:34.144 SYMLINK libspdk_bdev_error.so 00:07:34.144 SO libspdk_bdev_aio.so.6.0 00:07:34.144 SO libspdk_bdev_delay.so.6.0 00:07:34.144 SYMLINK libspdk_bdev_ftl.so 00:07:34.144 SO libspdk_bdev_iscsi.so.6.0 00:07:34.144 SYMLINK libspdk_bdev_zone_block.so 00:07:34.144 SYMLINK libspdk_bdev_aio.so 00:07:34.405 SYMLINK libspdk_bdev_malloc.so 00:07:34.405 SYMLINK libspdk_bdev_delay.so 00:07:34.405 LIB libspdk_bdev_lvol.a 00:07:34.405 SYMLINK libspdk_bdev_iscsi.so 00:07:34.405 LIB libspdk_bdev_virtio.a 00:07:34.405 SO libspdk_bdev_lvol.so.6.0 00:07:34.405 SO libspdk_bdev_virtio.so.6.0 00:07:34.405 SYMLINK libspdk_bdev_lvol.so 00:07:34.405 SYMLINK libspdk_bdev_virtio.so 00:07:34.665 LIB libspdk_bdev_raid.a 00:07:34.665 SO libspdk_bdev_raid.so.6.0 00:07:34.926 SYMLINK libspdk_bdev_raid.so 00:07:36.309 LIB libspdk_bdev_nvme.a 00:07:36.309 SO libspdk_bdev_nvme.so.7.1 00:07:36.309 SYMLINK 
libspdk_bdev_nvme.so 00:07:36.881 CC module/event/subsystems/vmd/vmd.o 00:07:36.881 CC module/event/subsystems/vmd/vmd_rpc.o 00:07:36.881 CC module/event/subsystems/iobuf/iobuf.o 00:07:36.881 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:07:36.881 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:07:36.881 CC module/event/subsystems/sock/sock.o 00:07:36.881 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:07:36.881 CC module/event/subsystems/fsdev/fsdev.o 00:07:36.881 CC module/event/subsystems/keyring/keyring.o 00:07:36.881 CC module/event/subsystems/scheduler/scheduler.o 00:07:37.142 LIB libspdk_event_vmd.a 00:07:37.142 LIB libspdk_event_keyring.a 00:07:37.142 LIB libspdk_event_vhost_blk.a 00:07:37.142 LIB libspdk_event_sock.a 00:07:37.142 LIB libspdk_event_vfu_tgt.a 00:07:37.142 LIB libspdk_event_fsdev.a 00:07:37.142 LIB libspdk_event_scheduler.a 00:07:37.142 LIB libspdk_event_iobuf.a 00:07:37.142 SO libspdk_event_keyring.so.1.0 00:07:37.142 SO libspdk_event_vhost_blk.so.3.0 00:07:37.142 SO libspdk_event_vmd.so.6.0 00:07:37.142 SO libspdk_event_vfu_tgt.so.3.0 00:07:37.142 SO libspdk_event_sock.so.5.0 00:07:37.142 SO libspdk_event_fsdev.so.1.0 00:07:37.142 SO libspdk_event_scheduler.so.4.0 00:07:37.142 SO libspdk_event_iobuf.so.3.0 00:07:37.403 SYMLINK libspdk_event_vhost_blk.so 00:07:37.403 SYMLINK libspdk_event_keyring.so 00:07:37.403 SYMLINK libspdk_event_sock.so 00:07:37.403 SYMLINK libspdk_event_vmd.so 00:07:37.403 SYMLINK libspdk_event_vfu_tgt.so 00:07:37.403 SYMLINK libspdk_event_fsdev.so 00:07:37.403 SYMLINK libspdk_event_scheduler.so 00:07:37.403 SYMLINK libspdk_event_iobuf.so 00:07:37.663 CC module/event/subsystems/accel/accel.o 00:07:37.923 LIB libspdk_event_accel.a 00:07:37.923 SO libspdk_event_accel.so.6.0 00:07:37.923 SYMLINK libspdk_event_accel.so 00:07:38.183 CC module/event/subsystems/bdev/bdev.o 00:07:38.444 LIB libspdk_event_bdev.a 00:07:38.444 SO libspdk_event_bdev.so.6.0 00:07:38.704 SYMLINK libspdk_event_bdev.so 00:07:38.964 CC 
module/event/subsystems/scsi/scsi.o 00:07:38.964 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:07:38.964 CC module/event/subsystems/ublk/ublk.o 00:07:38.964 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:07:38.964 CC module/event/subsystems/nbd/nbd.o 00:07:39.225 LIB libspdk_event_nbd.a 00:07:39.225 LIB libspdk_event_ublk.a 00:07:39.225 LIB libspdk_event_scsi.a 00:07:39.225 SO libspdk_event_nbd.so.6.0 00:07:39.225 SO libspdk_event_ublk.so.3.0 00:07:39.225 SO libspdk_event_scsi.so.6.0 00:07:39.225 LIB libspdk_event_nvmf.a 00:07:39.225 SYMLINK libspdk_event_ublk.so 00:07:39.225 SYMLINK libspdk_event_nbd.so 00:07:39.225 SYMLINK libspdk_event_scsi.so 00:07:39.225 SO libspdk_event_nvmf.so.6.0 00:07:39.225 SYMLINK libspdk_event_nvmf.so 00:07:39.486 CC module/event/subsystems/iscsi/iscsi.o 00:07:39.486 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:07:39.746 LIB libspdk_event_iscsi.a 00:07:39.746 LIB libspdk_event_vhost_scsi.a 00:07:39.746 SO libspdk_event_vhost_scsi.so.3.0 00:07:39.746 SO libspdk_event_iscsi.so.6.0 00:07:40.006 SYMLINK libspdk_event_vhost_scsi.so 00:07:40.006 SYMLINK libspdk_event_iscsi.so 00:07:40.006 SO libspdk.so.6.0 00:07:40.006 SYMLINK libspdk.so 00:07:40.578 CXX app/trace/trace.o 00:07:40.578 CC app/spdk_lspci/spdk_lspci.o 00:07:40.578 CC app/trace_record/trace_record.o 00:07:40.578 CC app/spdk_nvme_identify/identify.o 00:07:40.578 CC app/spdk_top/spdk_top.o 00:07:40.578 CC app/spdk_nvme_discover/discovery_aer.o 00:07:40.578 TEST_HEADER include/spdk/accel.h 00:07:40.578 CC app/spdk_nvme_perf/perf.o 00:07:40.578 TEST_HEADER include/spdk/accel_module.h 00:07:40.578 TEST_HEADER include/spdk/barrier.h 00:07:40.578 TEST_HEADER include/spdk/assert.h 00:07:40.578 CC test/rpc_client/rpc_client_test.o 00:07:40.578 TEST_HEADER include/spdk/base64.h 00:07:40.578 TEST_HEADER include/spdk/bdev.h 00:07:40.578 TEST_HEADER include/spdk/bdev_module.h 00:07:40.578 TEST_HEADER include/spdk/bdev_zone.h 00:07:40.578 TEST_HEADER include/spdk/bit_array.h 
00:07:40.578 TEST_HEADER include/spdk/bit_pool.h 00:07:40.578 TEST_HEADER include/spdk/blob_bdev.h 00:07:40.578 TEST_HEADER include/spdk/blobfs_bdev.h 00:07:40.578 TEST_HEADER include/spdk/blobfs.h 00:07:40.578 TEST_HEADER include/spdk/blob.h 00:07:40.578 TEST_HEADER include/spdk/conf.h 00:07:40.578 TEST_HEADER include/spdk/config.h 00:07:40.578 TEST_HEADER include/spdk/cpuset.h 00:07:40.578 TEST_HEADER include/spdk/crc16.h 00:07:40.578 TEST_HEADER include/spdk/crc32.h 00:07:40.578 TEST_HEADER include/spdk/crc64.h 00:07:40.578 TEST_HEADER include/spdk/dif.h 00:07:40.578 CC examples/interrupt_tgt/interrupt_tgt.o 00:07:40.578 TEST_HEADER include/spdk/dma.h 00:07:40.578 TEST_HEADER include/spdk/env_dpdk.h 00:07:40.578 TEST_HEADER include/spdk/endian.h 00:07:40.578 TEST_HEADER include/spdk/env.h 00:07:40.578 TEST_HEADER include/spdk/event.h 00:07:40.578 TEST_HEADER include/spdk/fd_group.h 00:07:40.578 TEST_HEADER include/spdk/file.h 00:07:40.578 TEST_HEADER include/spdk/fd.h 00:07:40.578 CC app/nvmf_tgt/nvmf_main.o 00:07:40.578 TEST_HEADER include/spdk/fsdev.h 00:07:40.578 CC app/spdk_dd/spdk_dd.o 00:07:40.578 TEST_HEADER include/spdk/fsdev_module.h 00:07:40.578 TEST_HEADER include/spdk/ftl.h 00:07:40.578 TEST_HEADER include/spdk/fuse_dispatcher.h 00:07:40.578 TEST_HEADER include/spdk/gpt_spec.h 00:07:40.578 TEST_HEADER include/spdk/hexlify.h 00:07:40.578 CC app/iscsi_tgt/iscsi_tgt.o 00:07:40.578 TEST_HEADER include/spdk/histogram_data.h 00:07:40.578 TEST_HEADER include/spdk/idxd.h 00:07:40.578 TEST_HEADER include/spdk/idxd_spec.h 00:07:40.578 TEST_HEADER include/spdk/ioat.h 00:07:40.578 TEST_HEADER include/spdk/init.h 00:07:40.578 TEST_HEADER include/spdk/ioat_spec.h 00:07:40.578 TEST_HEADER include/spdk/iscsi_spec.h 00:07:40.578 TEST_HEADER include/spdk/json.h 00:07:40.578 TEST_HEADER include/spdk/jsonrpc.h 00:07:40.578 TEST_HEADER include/spdk/keyring.h 00:07:40.578 TEST_HEADER include/spdk/keyring_module.h 00:07:40.578 TEST_HEADER include/spdk/likely.h 00:07:40.578 
TEST_HEADER include/spdk/log.h 00:07:40.578 TEST_HEADER include/spdk/lvol.h 00:07:40.578 TEST_HEADER include/spdk/memory.h 00:07:40.578 TEST_HEADER include/spdk/md5.h 00:07:40.578 TEST_HEADER include/spdk/mmio.h 00:07:40.578 CC app/spdk_tgt/spdk_tgt.o 00:07:40.578 TEST_HEADER include/spdk/net.h 00:07:40.578 TEST_HEADER include/spdk/nbd.h 00:07:40.578 TEST_HEADER include/spdk/nvme.h 00:07:40.578 TEST_HEADER include/spdk/notify.h 00:07:40.578 TEST_HEADER include/spdk/nvme_ocssd.h 00:07:40.578 TEST_HEADER include/spdk/nvme_intel.h 00:07:40.578 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:07:40.578 TEST_HEADER include/spdk/nvme_spec.h 00:07:40.578 TEST_HEADER include/spdk/nvmf_cmd.h 00:07:40.578 TEST_HEADER include/spdk/nvme_zns.h 00:07:40.578 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:07:40.578 TEST_HEADER include/spdk/nvmf.h 00:07:40.578 TEST_HEADER include/spdk/nvmf_spec.h 00:07:40.578 TEST_HEADER include/spdk/opal.h 00:07:40.578 TEST_HEADER include/spdk/nvmf_transport.h 00:07:40.578 TEST_HEADER include/spdk/opal_spec.h 00:07:40.578 TEST_HEADER include/spdk/pipe.h 00:07:40.578 TEST_HEADER include/spdk/pci_ids.h 00:07:40.578 TEST_HEADER include/spdk/queue.h 00:07:40.578 TEST_HEADER include/spdk/reduce.h 00:07:40.578 TEST_HEADER include/spdk/rpc.h 00:07:40.578 TEST_HEADER include/spdk/scsi.h 00:07:40.578 TEST_HEADER include/spdk/scheduler.h 00:07:40.578 TEST_HEADER include/spdk/scsi_spec.h 00:07:40.578 TEST_HEADER include/spdk/sock.h 00:07:40.578 TEST_HEADER include/spdk/stdinc.h 00:07:40.578 TEST_HEADER include/spdk/string.h 00:07:40.578 TEST_HEADER include/spdk/thread.h 00:07:40.578 TEST_HEADER include/spdk/trace.h 00:07:40.578 TEST_HEADER include/spdk/tree.h 00:07:40.578 TEST_HEADER include/spdk/trace_parser.h 00:07:40.578 TEST_HEADER include/spdk/ublk.h 00:07:40.578 TEST_HEADER include/spdk/uuid.h 00:07:40.578 TEST_HEADER include/spdk/util.h 00:07:40.578 TEST_HEADER include/spdk/version.h 00:07:40.578 TEST_HEADER include/spdk/vfio_user_pci.h 00:07:40.578 
TEST_HEADER include/spdk/vfio_user_spec.h 00:07:40.578 TEST_HEADER include/spdk/vhost.h 00:07:40.578 TEST_HEADER include/spdk/vmd.h 00:07:40.578 TEST_HEADER include/spdk/xor.h 00:07:40.578 TEST_HEADER include/spdk/zipf.h 00:07:40.578 CXX test/cpp_headers/accel.o 00:07:40.578 CXX test/cpp_headers/assert.o 00:07:40.578 CXX test/cpp_headers/accel_module.o 00:07:40.578 CXX test/cpp_headers/barrier.o 00:07:40.578 CXX test/cpp_headers/bdev.o 00:07:40.578 CXX test/cpp_headers/bdev_module.o 00:07:40.578 CXX test/cpp_headers/base64.o 00:07:40.578 CXX test/cpp_headers/bit_array.o 00:07:40.578 CXX test/cpp_headers/bdev_zone.o 00:07:40.578 CXX test/cpp_headers/bit_pool.o 00:07:40.578 CXX test/cpp_headers/blobfs_bdev.o 00:07:40.578 CXX test/cpp_headers/blobfs.o 00:07:40.578 CXX test/cpp_headers/blob_bdev.o 00:07:40.578 CXX test/cpp_headers/blob.o 00:07:40.578 CXX test/cpp_headers/cpuset.o 00:07:40.578 CXX test/cpp_headers/conf.o 00:07:40.578 CXX test/cpp_headers/config.o 00:07:40.578 CXX test/cpp_headers/crc32.o 00:07:40.578 CXX test/cpp_headers/crc16.o 00:07:40.578 CXX test/cpp_headers/dif.o 00:07:40.578 CXX test/cpp_headers/crc64.o 00:07:40.578 CXX test/cpp_headers/dma.o 00:07:40.578 CXX test/cpp_headers/endian.o 00:07:40.578 CXX test/cpp_headers/env_dpdk.o 00:07:40.578 CXX test/cpp_headers/env.o 00:07:40.843 CXX test/cpp_headers/event.o 00:07:40.843 CXX test/cpp_headers/fd_group.o 00:07:40.843 CXX test/cpp_headers/fd.o 00:07:40.843 CXX test/cpp_headers/file.o 00:07:40.843 CXX test/cpp_headers/ftl.o 00:07:40.843 CXX test/cpp_headers/fsdev_module.o 00:07:40.843 CXX test/cpp_headers/fsdev.o 00:07:40.843 CXX test/cpp_headers/fuse_dispatcher.o 00:07:40.843 CXX test/cpp_headers/gpt_spec.o 00:07:40.843 CXX test/cpp_headers/histogram_data.o 00:07:40.843 CXX test/cpp_headers/idxd.o 00:07:40.843 CXX test/cpp_headers/hexlify.o 00:07:40.843 CXX test/cpp_headers/idxd_spec.o 00:07:40.843 CXX test/cpp_headers/init.o 00:07:40.843 CXX test/cpp_headers/ioat.o 00:07:40.843 CXX 
test/cpp_headers/ioat_spec.o 00:07:40.843 CXX test/cpp_headers/json.o 00:07:40.843 CXX test/cpp_headers/iscsi_spec.o 00:07:40.843 CXX test/cpp_headers/jsonrpc.o 00:07:40.843 CXX test/cpp_headers/keyring_module.o 00:07:40.843 CXX test/cpp_headers/lvol.o 00:07:40.843 CXX test/cpp_headers/keyring.o 00:07:40.843 CXX test/cpp_headers/log.o 00:07:40.843 CXX test/cpp_headers/likely.o 00:07:40.843 CXX test/cpp_headers/memory.o 00:07:40.843 CXX test/cpp_headers/mmio.o 00:07:40.843 CXX test/cpp_headers/md5.o 00:07:40.843 CXX test/cpp_headers/nbd.o 00:07:40.843 CXX test/cpp_headers/net.o 00:07:40.843 CXX test/cpp_headers/nvme_intel.o 00:07:40.843 CXX test/cpp_headers/notify.o 00:07:40.843 CXX test/cpp_headers/nvme_ocssd.o 00:07:40.843 CXX test/cpp_headers/nvme.o 00:07:40.843 CXX test/cpp_headers/nvme_spec.o 00:07:40.843 CXX test/cpp_headers/nvme_ocssd_spec.o 00:07:40.843 CC examples/util/zipf/zipf.o 00:07:40.843 CXX test/cpp_headers/nvme_zns.o 00:07:40.843 CXX test/cpp_headers/nvmf_cmd.o 00:07:40.843 CC examples/ioat/verify/verify.o 00:07:40.843 CC test/app/jsoncat/jsoncat.o 00:07:40.843 CXX test/cpp_headers/nvmf.o 00:07:40.843 CXX test/cpp_headers/nvmf_spec.o 00:07:40.843 CXX test/cpp_headers/nvmf_fc_spec.o 00:07:40.843 CC examples/ioat/perf/perf.o 00:07:40.843 CXX test/cpp_headers/nvmf_transport.o 00:07:40.843 CXX test/cpp_headers/opal_spec.o 00:07:40.843 CXX test/cpp_headers/pci_ids.o 00:07:40.843 CXX test/cpp_headers/opal.o 00:07:40.843 CXX test/cpp_headers/pipe.o 00:07:40.843 CXX test/cpp_headers/queue.o 00:07:40.843 CXX test/cpp_headers/reduce.o 00:07:40.843 CXX test/cpp_headers/rpc.o 00:07:40.843 CC test/env/pci/pci_ut.o 00:07:40.843 LINK spdk_lspci 00:07:40.843 CXX test/cpp_headers/scsi_spec.o 00:07:40.843 CC test/app/stub/stub.o 00:07:40.843 CXX test/cpp_headers/sock.o 00:07:40.843 CXX test/cpp_headers/scheduler.o 00:07:40.843 CXX test/cpp_headers/scsi.o 00:07:40.843 CXX test/cpp_headers/stdinc.o 00:07:40.843 CC test/thread/poller_perf/poller_perf.o 00:07:40.843 CXX 
test/cpp_headers/string.o 00:07:40.843 CXX test/cpp_headers/thread.o 00:07:40.843 CC test/app/histogram_perf/histogram_perf.o 00:07:40.843 CXX test/cpp_headers/trace.o 00:07:40.843 CXX test/cpp_headers/trace_parser.o 00:07:40.843 CXX test/cpp_headers/ublk.o 00:07:40.843 CXX test/cpp_headers/tree.o 00:07:40.843 CXX test/cpp_headers/uuid.o 00:07:40.843 CC test/env/vtophys/vtophys.o 00:07:40.843 CC test/env/memory/memory_ut.o 00:07:40.843 CXX test/cpp_headers/util.o 00:07:40.843 CXX test/cpp_headers/version.o 00:07:40.843 CXX test/cpp_headers/vfio_user_spec.o 00:07:40.843 CXX test/cpp_headers/vfio_user_pci.o 00:07:40.843 CXX test/cpp_headers/vhost.o 00:07:40.843 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:07:40.843 CXX test/cpp_headers/vmd.o 00:07:40.843 CXX test/cpp_headers/xor.o 00:07:40.843 CXX test/cpp_headers/zipf.o 00:07:40.843 CC app/fio/nvme/fio_plugin.o 00:07:40.843 CC test/dma/test_dma/test_dma.o 00:07:40.843 CC app/fio/bdev/fio_plugin.o 00:07:40.843 CC test/app/bdev_svc/bdev_svc.o 00:07:41.110 LINK interrupt_tgt 00:07:41.110 LINK rpc_client_test 00:07:41.110 LINK spdk_nvme_discover 00:07:41.110 LINK nvmf_tgt 00:07:41.374 LINK iscsi_tgt 00:07:41.374 LINK spdk_trace_record 00:07:41.374 CC test/env/mem_callbacks/mem_callbacks.o 00:07:41.374 LINK spdk_tgt 00:07:41.634 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:07:41.634 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:07:41.634 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:07:41.634 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:07:41.634 LINK ioat_perf 00:07:41.634 LINK spdk_dd 00:07:41.893 LINK jsoncat 00:07:41.893 LINK spdk_trace 00:07:41.893 LINK zipf 00:07:41.893 LINK poller_perf 00:07:41.893 LINK histogram_perf 00:07:41.893 LINK vtophys 00:07:41.893 LINK env_dpdk_post_init 00:07:41.893 LINK stub 00:07:41.893 LINK verify 00:07:41.893 LINK bdev_svc 00:07:42.154 LINK spdk_nvme_perf 00:07:42.154 LINK pci_ut 00:07:42.154 LINK nvme_fuzz 00:07:42.154 LINK vhost_fuzz 00:07:42.154 CC app/vhost/vhost.o 
00:07:42.415 LINK spdk_top 00:07:42.415 LINK spdk_bdev 00:07:42.415 LINK test_dma 00:07:42.415 LINK spdk_nvme 00:07:42.415 LINK spdk_nvme_identify 00:07:42.415 LINK mem_callbacks 00:07:42.415 CC examples/sock/hello_world/hello_sock.o 00:07:42.415 CC examples/idxd/perf/perf.o 00:07:42.415 CC examples/vmd/lsvmd/lsvmd.o 00:07:42.415 CC test/event/event_perf/event_perf.o 00:07:42.415 CC examples/vmd/led/led.o 00:07:42.415 CC test/event/reactor_perf/reactor_perf.o 00:07:42.415 CC test/event/reactor/reactor.o 00:07:42.415 CC test/event/app_repeat/app_repeat.o 00:07:42.415 CC test/event/scheduler/scheduler.o 00:07:42.415 CC examples/thread/thread/thread_ex.o 00:07:42.415 LINK vhost 00:07:42.677 LINK led 00:07:42.677 LINK event_perf 00:07:42.677 LINK lsvmd 00:07:42.677 LINK reactor 00:07:42.677 LINK reactor_perf 00:07:42.677 LINK app_repeat 00:07:42.677 LINK hello_sock 00:07:42.677 LINK scheduler 00:07:42.677 LINK idxd_perf 00:07:42.677 LINK thread 00:07:42.938 LINK memory_ut 00:07:42.938 CC test/nvme/simple_copy/simple_copy.o 00:07:42.938 CC test/nvme/err_injection/err_injection.o 00:07:42.938 CC test/nvme/boot_partition/boot_partition.o 00:07:42.938 CC test/nvme/overhead/overhead.o 00:07:42.938 CC test/nvme/reset/reset.o 00:07:42.938 CC test/nvme/cuse/cuse.o 00:07:42.938 CC test/nvme/reserve/reserve.o 00:07:42.938 CC test/nvme/e2edp/nvme_dp.o 00:07:42.938 CC test/nvme/aer/aer.o 00:07:42.938 CC test/nvme/doorbell_aers/doorbell_aers.o 00:07:42.938 CC test/nvme/fdp/fdp.o 00:07:42.938 CC test/nvme/sgl/sgl.o 00:07:42.938 CC test/nvme/startup/startup.o 00:07:42.938 CC test/nvme/connect_stress/connect_stress.o 00:07:42.938 CC test/nvme/compliance/nvme_compliance.o 00:07:42.938 CC test/nvme/fused_ordering/fused_ordering.o 00:07:42.938 CC test/blobfs/mkfs/mkfs.o 00:07:42.938 CC test/accel/dif/dif.o 00:07:43.198 CC test/lvol/esnap/esnap.o 00:07:43.198 LINK boot_partition 00:07:43.198 LINK err_injection 00:07:43.198 LINK startup 00:07:43.198 LINK doorbell_aers 00:07:43.198 LINK 
reserve 00:07:43.198 LINK connect_stress 00:07:43.198 LINK simple_copy 00:07:43.198 CC examples/nvme/reconnect/reconnect.o 00:07:43.198 CC examples/nvme/cmb_copy/cmb_copy.o 00:07:43.198 LINK iscsi_fuzz 00:07:43.198 CC examples/nvme/hello_world/hello_world.o 00:07:43.198 LINK fused_ordering 00:07:43.198 CC examples/nvme/abort/abort.o 00:07:43.198 LINK mkfs 00:07:43.198 CC examples/nvme/nvme_manage/nvme_manage.o 00:07:43.198 CC examples/nvme/arbitration/arbitration.o 00:07:43.198 CC examples/nvme/hotplug/hotplug.o 00:07:43.198 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:07:43.198 LINK nvme_dp 00:07:43.198 LINK reset 00:07:43.199 LINK overhead 00:07:43.199 LINK sgl 00:07:43.199 LINK aer 00:07:43.459 LINK nvme_compliance 00:07:43.459 LINK fdp 00:07:43.459 CC examples/accel/perf/accel_perf.o 00:07:43.459 CC examples/fsdev/hello_world/hello_fsdev.o 00:07:43.459 CC examples/blob/hello_world/hello_blob.o 00:07:43.459 CC examples/blob/cli/blobcli.o 00:07:43.459 LINK cmb_copy 00:07:43.459 LINK pmr_persistence 00:07:43.459 LINK hello_world 00:07:43.459 LINK hotplug 00:07:43.719 LINK reconnect 00:07:43.719 LINK arbitration 00:07:43.719 LINK abort 00:07:43.719 LINK dif 00:07:43.719 LINK hello_blob 00:07:43.719 LINK nvme_manage 00:07:43.719 LINK hello_fsdev 00:07:43.980 LINK accel_perf 00:07:43.980 LINK blobcli 00:07:44.241 LINK cuse 00:07:44.241 CC test/bdev/bdevio/bdevio.o 00:07:44.502 CC examples/bdev/hello_world/hello_bdev.o 00:07:44.502 CC examples/bdev/bdevperf/bdevperf.o 00:07:44.762 LINK bdevio 00:07:44.762 LINK hello_bdev 00:07:45.334 LINK bdevperf 00:07:45.906 CC examples/nvmf/nvmf/nvmf.o 00:07:46.166 LINK nvmf 00:07:47.552 LINK esnap 00:07:48.123 00:07:48.123 real 0m55.698s 00:07:48.123 user 8m5.408s 00:07:48.123 sys 5m26.329s 00:07:48.123 11:51:12 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:07:48.123 11:51:12 make -- common/autotest_common.sh@10 -- $ set +x 00:07:48.123 ************************************ 00:07:48.123 END TEST make 
00:07:48.123 ************************************ 00:07:48.123 11:51:12 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:07:48.123 11:51:12 -- pm/common@29 -- $ signal_monitor_resources TERM 00:07:48.123 11:51:12 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:07:48.123 11:51:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:48.123 11:51:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:07:48.123 11:51:12 -- pm/common@44 -- $ pid=1034072 00:07:48.123 11:51:12 -- pm/common@50 -- $ kill -TERM 1034072 00:07:48.123 11:51:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:48.123 11:51:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:07:48.123 11:51:12 -- pm/common@44 -- $ pid=1034073 00:07:48.123 11:51:12 -- pm/common@50 -- $ kill -TERM 1034073 00:07:48.123 11:51:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:48.123 11:51:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:07:48.123 11:51:12 -- pm/common@44 -- $ pid=1034075 00:07:48.123 11:51:12 -- pm/common@50 -- $ kill -TERM 1034075 00:07:48.123 11:51:12 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:07:48.123 11:51:12 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:07:48.123 11:51:12 -- pm/common@44 -- $ pid=1034099 00:07:48.123 11:51:12 -- pm/common@50 -- $ sudo -E kill -TERM 1034099 00:07:48.123 11:51:12 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:07:48.123 11:51:12 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:07:48.123 11:51:13 -- common/autotest_common.sh@1710 -- # [[ y 
== y ]] 00:07:48.123 11:51:13 -- common/autotest_common.sh@1711 -- # lcov --version 00:07:48.123 11:51:13 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:48.123 11:51:13 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:48.123 11:51:13 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.123 11:51:13 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.123 11:51:13 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.123 11:51:13 -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.123 11:51:13 -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.123 11:51:13 -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.123 11:51:13 -- scripts/common.sh@337 -- # read -ra ver2 00:07:48.123 11:51:13 -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.123 11:51:13 -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.123 11:51:13 -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.123 11:51:13 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.123 11:51:13 -- scripts/common.sh@344 -- # case "$op" in 00:07:48.123 11:51:13 -- scripts/common.sh@345 -- # : 1 00:07:48.123 11:51:13 -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.385 11:51:13 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:48.385 11:51:13 -- scripts/common.sh@365 -- # decimal 1 00:07:48.385 11:51:13 -- scripts/common.sh@353 -- # local d=1 00:07:48.385 11:51:13 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.385 11:51:13 -- scripts/common.sh@355 -- # echo 1 00:07:48.385 11:51:13 -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.385 11:51:13 -- scripts/common.sh@366 -- # decimal 2 00:07:48.385 11:51:13 -- scripts/common.sh@353 -- # local d=2 00:07:48.385 11:51:13 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.385 11:51:13 -- scripts/common.sh@355 -- # echo 2 00:07:48.385 11:51:13 -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.385 11:51:13 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.386 11:51:13 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.386 11:51:13 -- scripts/common.sh@368 -- # return 0 00:07:48.386 11:51:13 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.386 11:51:13 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:48.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.386 --rc genhtml_branch_coverage=1 00:07:48.386 --rc genhtml_function_coverage=1 00:07:48.386 --rc genhtml_legend=1 00:07:48.386 --rc geninfo_all_blocks=1 00:07:48.386 --rc geninfo_unexecuted_blocks=1 00:07:48.386 00:07:48.386 ' 00:07:48.386 11:51:13 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:48.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.386 --rc genhtml_branch_coverage=1 00:07:48.386 --rc genhtml_function_coverage=1 00:07:48.386 --rc genhtml_legend=1 00:07:48.386 --rc geninfo_all_blocks=1 00:07:48.386 --rc geninfo_unexecuted_blocks=1 00:07:48.386 00:07:48.386 ' 00:07:48.386 11:51:13 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:48.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.386 --rc genhtml_branch_coverage=1 00:07:48.386 --rc 
genhtml_function_coverage=1 00:07:48.386 --rc genhtml_legend=1 00:07:48.386 --rc geninfo_all_blocks=1 00:07:48.386 --rc geninfo_unexecuted_blocks=1 00:07:48.386 00:07:48.386 ' 00:07:48.386 11:51:13 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:48.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.386 --rc genhtml_branch_coverage=1 00:07:48.386 --rc genhtml_function_coverage=1 00:07:48.386 --rc genhtml_legend=1 00:07:48.386 --rc geninfo_all_blocks=1 00:07:48.386 --rc geninfo_unexecuted_blocks=1 00:07:48.386 00:07:48.386 ' 00:07:48.386 11:51:13 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:48.386 11:51:13 -- nvmf/common.sh@7 -- # uname -s 00:07:48.386 11:51:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:48.386 11:51:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:48.386 11:51:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:48.386 11:51:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:48.386 11:51:13 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:48.386 11:51:13 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:48.386 11:51:13 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:48.386 11:51:13 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:48.386 11:51:13 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:48.386 11:51:13 -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:48.386 11:51:13 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:48.386 11:51:13 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:48.386 11:51:13 -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:48.386 11:51:13 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:48.386 11:51:13 -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:07:48.386 11:51:13 -- scripts/common.sh@15 -- # shopt -s extglob 00:07:48.386 11:51:13 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:48.386 11:51:13 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:48.386 11:51:13 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:48.386 11:51:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.386 11:51:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.386 11:51:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.386 11:51:13 -- paths/export.sh@5 -- # export PATH 00:07:48.386 11:51:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:48.386 11:51:13 -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:07:48.386 11:51:13 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:48.386 11:51:13 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:48.386 11:51:13 -- nvmf/setup.sh@8 -- # 
NVMF_TARGET_NS_CMD=() 00:07:48.386 11:51:13 -- nvmf/common.sh@50 -- # : 0 00:07:48.386 11:51:13 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:48.386 11:51:13 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:48.386 11:51:13 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:48.386 11:51:13 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:48.386 11:51:13 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:48.386 11:51:13 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:48.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:48.386 11:51:13 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:48.386 11:51:13 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:48.386 11:51:13 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:48.386 11:51:13 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:07:48.386 11:51:13 -- spdk/autotest.sh@32 -- # uname -s 00:07:48.386 11:51:13 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:07:48.386 11:51:13 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:07:48.386 11:51:13 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:48.386 11:51:13 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:07:48.386 11:51:13 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:07:48.386 11:51:13 -- spdk/autotest.sh@44 -- # modprobe nbd 00:07:48.386 11:51:13 -- spdk/autotest.sh@46 -- # type -P udevadm 00:07:48.386 11:51:13 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:07:48.386 11:51:13 -- spdk/autotest.sh@48 -- # udevadm_pid=1100209 00:07:48.386 11:51:13 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:07:48.386 11:51:13 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:07:48.386 
11:51:13 -- pm/common@17 -- # local monitor 00:07:48.386 11:51:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:48.386 11:51:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:48.386 11:51:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:48.386 11:51:13 -- pm/common@21 -- # date +%s 00:07:48.386 11:51:13 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:07:48.386 11:51:13 -- pm/common@21 -- # date +%s 00:07:48.386 11:51:13 -- pm/common@25 -- # sleep 1 00:07:48.386 11:51:13 -- pm/common@21 -- # date +%s 00:07:48.386 11:51:13 -- pm/common@21 -- # date +%s 00:07:48.386 11:51:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733395873 00:07:48.386 11:51:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733395873 00:07:48.386 11:51:13 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733395873 00:07:48.386 11:51:13 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1733395873 00:07:48.387 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733395873_collect-cpu-load.pm.log 00:07:48.387 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733395873_collect-vmstat.pm.log 00:07:48.387 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733395873_collect-cpu-temp.pm.log 00:07:48.387 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1733395873_collect-bmc-pm.bmc.pm.log 00:07:49.331 11:51:14 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:07:49.331 11:51:14 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:07:49.331 11:51:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:49.331 11:51:14 -- common/autotest_common.sh@10 -- # set +x 00:07:49.331 11:51:14 -- spdk/autotest.sh@59 -- # create_test_list 00:07:49.331 11:51:14 -- common/autotest_common.sh@752 -- # xtrace_disable 00:07:49.331 11:51:14 -- common/autotest_common.sh@10 -- # set +x 00:07:49.331 11:51:14 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:07:49.331 11:51:14 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:49.331 11:51:14 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:49.331 11:51:14 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:07:49.331 11:51:14 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:49.331 11:51:14 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:07:49.331 11:51:14 -- common/autotest_common.sh@1457 -- # uname 00:07:49.331 11:51:14 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:07:49.331 11:51:14 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:07:49.331 11:51:14 -- common/autotest_common.sh@1477 -- # uname 00:07:49.331 11:51:14 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:07:49.331 11:51:14 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:07:49.331 11:51:14 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:07:49.593 lcov: LCOV version 1.15 00:07:49.593 11:51:14 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:08:04.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:08:04.501 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:08:22.771 11:51:44 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:08:22.771 11:51:44 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:22.771 11:51:44 -- common/autotest_common.sh@10 -- # set +x 00:08:22.771 11:51:44 -- spdk/autotest.sh@78 -- # rm -f 00:08:22.771 11:51:44 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:23.713 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:08:23.713 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:08:23.713 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:08:23.713 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:08:23.713 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:08:23.713 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:08:23.713 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:08:23.713 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:08:23.713 0000:65:00.0 (144d a80a): Already using the nvme driver 00:08:23.713 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:08:23.713 0000:00:01.7 
(8086 0b00): Already using the ioatdma driver 00:08:23.713 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:08:23.713 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:08:23.713 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:08:23.974 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:08:23.974 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:08:23.974 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:08:23.974 11:51:48 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:08:23.974 11:51:48 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:08:23.974 11:51:48 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:08:23.974 11:51:48 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:08:23.974 11:51:48 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:08:23.974 11:51:48 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:08:23.974 11:51:48 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:08:23.974 11:51:48 -- common/autotest_common.sh@1669 -- # bdf=0000:65:00.0 00:08:23.974 11:51:48 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:08:23.974 11:51:48 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:08:23.974 11:51:48 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:08:23.974 11:51:48 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:23.974 11:51:48 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:08:23.974 11:51:48 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:08:23.974 11:51:48 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:08:23.974 11:51:48 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:08:23.974 11:51:48 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:08:23.974 11:51:48 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:08:23.974 11:51:48 -- scripts/common.sh@390 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:08:23.974 No valid GPT data, bailing 00:08:23.974 11:51:48 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:08:23.974 11:51:48 -- scripts/common.sh@394 -- # pt= 00:08:23.974 11:51:48 -- scripts/common.sh@395 -- # return 1 00:08:23.974 11:51:48 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:08:23.974 1+0 records in 00:08:23.974 1+0 records out 00:08:23.974 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00191305 s, 548 MB/s 00:08:23.974 11:51:48 -- spdk/autotest.sh@105 -- # sync 00:08:23.974 11:51:48 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:08:23.974 11:51:48 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:08:23.974 11:51:48 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:08:33.973 11:51:57 -- spdk/autotest.sh@111 -- # uname -s 00:08:33.973 11:51:57 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:08:33.973 11:51:57 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:08:33.973 11:51:57 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:08:35.886 Hugepages 00:08:35.886 node hugesize free / total 00:08:35.886 node0 1048576kB 0 / 0 00:08:35.886 node0 2048kB 0 / 0 00:08:35.886 node1 1048576kB 0 / 0 00:08:35.886 node1 2048kB 0 / 0 00:08:35.886 00:08:35.886 Type BDF Vendor Device NUMA Driver Device Block devices 00:08:35.886 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:08:35.886 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:08:35.886 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:08:35.886 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:08:35.886 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:08:35.886 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:08:35.886 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:08:35.886 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:08:36.147 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 
00:08:36.147 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:08:36.147 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:08:36.147 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:08:36.147 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:08:36.147 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:08:36.147 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:08:36.147 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:08:36.147 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:08:36.147 11:52:01 -- spdk/autotest.sh@117 -- # uname -s 00:08:36.147 11:52:01 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:08:36.147 11:52:01 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:08:36.147 11:52:01 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:39.456 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:39.456 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:39.456 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:39.456 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:39.456 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:39.456 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:39.456 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:39.717 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:39.717 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:39.717 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:39.717 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:39.717 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:39.717 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:39.717 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:39.717 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:39.717 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:41.626 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:08:41.626 11:52:06 -- common/autotest_common.sh@1517 -- # sleep 1 00:08:42.566 11:52:07 -- common/autotest_common.sh@1518 -- # bdfs=() 00:08:42.566 11:52:07 -- common/autotest_common.sh@1518 -- # local bdfs 
00:08:42.566 11:52:07 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:08:42.566 11:52:07 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:08:42.566 11:52:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:42.566 11:52:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:42.566 11:52:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:42.566 11:52:07 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:42.566 11:52:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:42.566 11:52:07 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:42.566 11:52:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:08:42.566 11:52:07 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:08:45.918 Waiting for block devices as requested 00:08:45.918 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:08:46.178 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:08:46.178 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:08:46.178 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:08:46.438 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:08:46.438 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:08:46.438 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:08:46.725 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:08:46.725 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:08:46.985 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:08:46.985 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:08:46.985 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:08:46.985 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:08:47.246 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:08:47.246 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:08:47.246 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:08:47.506 0000:00:01.1 (8086 0b00): vfio-pci 
-> ioatdma 00:08:47.506 11:52:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:08:47.506 11:52:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:08:47.506 11:52:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:08:47.506 11:52:12 -- common/autotest_common.sh@1487 -- # grep 0000:65:00.0/nvme/nvme 00:08:47.506 11:52:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:08:47.506 11:52:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:08:47.506 11:52:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:08:47.506 11:52:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:08:47.506 11:52:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:08:47.506 11:52:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:08:47.506 11:52:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:08:47.506 11:52:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:08:47.506 11:52:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:08:47.506 11:52:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x5f' 00:08:47.506 11:52:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:08:47.506 11:52:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:08:47.506 11:52:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:08:47.506 11:52:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:08:47.506 11:52:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:08:47.506 11:52:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:08:47.506 11:52:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:08:47.506 11:52:12 -- common/autotest_common.sh@1543 -- # continue 00:08:47.506 11:52:12 -- spdk/autotest.sh@122 -- # timing_exit 
pre_cleanup 00:08:47.506 11:52:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:47.506 11:52:12 -- common/autotest_common.sh@10 -- # set +x 00:08:47.506 11:52:12 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:08:47.506 11:52:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:47.506 11:52:12 -- common/autotest_common.sh@10 -- # set +x 00:08:47.506 11:52:12 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:08:51.717 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:08:51.718 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:08:51.718 11:52:16 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:08:51.718 11:52:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:51.718 11:52:16 -- common/autotest_common.sh@10 -- # set +x 00:08:51.718 11:52:16 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:08:51.718 11:52:16 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:08:51.718 11:52:16 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:08:51.718 11:52:16 -- common/autotest_common.sh@1563 -- # 
bdfs=() 00:08:51.718 11:52:16 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:08:51.718 11:52:16 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:08:51.718 11:52:16 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:08:51.718 11:52:16 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:08:51.718 11:52:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:51.718 11:52:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:08:51.718 11:52:16 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:51.718 11:52:16 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:08:51.718 11:52:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:51.718 11:52:16 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:08:51.718 11:52:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:08:51.718 11:52:16 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:08:51.718 11:52:16 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:08:51.718 11:52:16 -- common/autotest_common.sh@1566 -- # device=0xa80a 00:08:51.718 11:52:16 -- common/autotest_common.sh@1567 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:08:51.718 11:52:16 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:08:51.718 11:52:16 -- common/autotest_common.sh@1572 -- # return 0 00:08:51.718 11:52:16 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:08:51.718 11:52:16 -- common/autotest_common.sh@1580 -- # return 0 00:08:51.718 11:52:16 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:08:51.718 11:52:16 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:08:51.718 11:52:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:51.718 11:52:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:08:51.718 11:52:16 -- spdk/autotest.sh@149 -- # timing_enter lib 00:08:51.718 11:52:16 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:51.718 11:52:16 -- common/autotest_common.sh@10 -- # set +x 00:08:51.718 11:52:16 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:08:51.718 11:52:16 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:51.718 11:52:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.718 11:52:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.718 11:52:16 -- common/autotest_common.sh@10 -- # set +x 00:08:51.718 ************************************ 00:08:51.718 START TEST env 00:08:51.718 ************************************ 00:08:51.718 11:52:16 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:08:51.718 * Looking for test storage... 00:08:51.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:08:51.718 11:52:16 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:51.718 11:52:16 env -- common/autotest_common.sh@1711 -- # lcov --version 00:08:51.718 11:52:16 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:51.718 11:52:16 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:51.718 11:52:16 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:51.718 11:52:16 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:51.718 11:52:16 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:51.718 11:52:16 env -- scripts/common.sh@336 -- # IFS=.-: 00:08:51.718 11:52:16 env -- scripts/common.sh@336 -- # read -ra ver1 00:08:51.718 11:52:16 env -- scripts/common.sh@337 -- # IFS=.-: 00:08:51.718 11:52:16 env -- scripts/common.sh@337 -- # read -ra ver2 00:08:51.718 11:52:16 env -- scripts/common.sh@338 -- # local 'op=<' 00:08:51.718 11:52:16 env -- scripts/common.sh@340 -- # ver1_l=2 00:08:51.718 11:52:16 env -- scripts/common.sh@341 -- # ver2_l=1 00:08:51.718 11:52:16 env -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:08:51.718 11:52:16 env -- scripts/common.sh@344 -- # case "$op" in 00:08:51.718 11:52:16 env -- scripts/common.sh@345 -- # : 1 00:08:51.718 11:52:16 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:51.718 11:52:16 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:51.718 11:52:16 env -- scripts/common.sh@365 -- # decimal 1 00:08:51.718 11:52:16 env -- scripts/common.sh@353 -- # local d=1 00:08:51.718 11:52:16 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:51.718 11:52:16 env -- scripts/common.sh@355 -- # echo 1 00:08:51.718 11:52:16 env -- scripts/common.sh@365 -- # ver1[v]=1 00:08:51.718 11:52:16 env -- scripts/common.sh@366 -- # decimal 2 00:08:51.718 11:52:16 env -- scripts/common.sh@353 -- # local d=2 00:08:51.718 11:52:16 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:51.718 11:52:16 env -- scripts/common.sh@355 -- # echo 2 00:08:51.718 11:52:16 env -- scripts/common.sh@366 -- # ver2[v]=2 00:08:51.718 11:52:16 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:51.718 11:52:16 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:51.718 11:52:16 env -- scripts/common.sh@368 -- # return 0 00:08:51.718 11:52:16 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:51.718 11:52:16 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:51.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.718 --rc genhtml_branch_coverage=1 00:08:51.718 --rc genhtml_function_coverage=1 00:08:51.718 --rc genhtml_legend=1 00:08:51.718 --rc geninfo_all_blocks=1 00:08:51.718 --rc geninfo_unexecuted_blocks=1 00:08:51.718 00:08:51.718 ' 00:08:51.718 11:52:16 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:51.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.718 --rc genhtml_branch_coverage=1 00:08:51.718 --rc genhtml_function_coverage=1 
00:08:51.718 --rc genhtml_legend=1 00:08:51.718 --rc geninfo_all_blocks=1 00:08:51.718 --rc geninfo_unexecuted_blocks=1 00:08:51.718 00:08:51.718 ' 00:08:51.718 11:52:16 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:51.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.718 --rc genhtml_branch_coverage=1 00:08:51.718 --rc genhtml_function_coverage=1 00:08:51.718 --rc genhtml_legend=1 00:08:51.718 --rc geninfo_all_blocks=1 00:08:51.718 --rc geninfo_unexecuted_blocks=1 00:08:51.718 00:08:51.718 ' 00:08:51.718 11:52:16 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:51.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:51.718 --rc genhtml_branch_coverage=1 00:08:51.718 --rc genhtml_function_coverage=1 00:08:51.718 --rc genhtml_legend=1 00:08:51.718 --rc geninfo_all_blocks=1 00:08:51.718 --rc geninfo_unexecuted_blocks=1 00:08:51.719 00:08:51.719 ' 00:08:51.719 11:52:16 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:51.719 11:52:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.719 11:52:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.719 11:52:16 env -- common/autotest_common.sh@10 -- # set +x 00:08:51.719 ************************************ 00:08:51.719 START TEST env_memory 00:08:51.719 ************************************ 00:08:51.719 11:52:16 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:08:51.719 00:08:51.719 00:08:51.719 CUnit - A unit testing framework for C - Version 2.1-3 00:08:51.719 http://cunit.sourceforge.net/ 00:08:51.719 00:08:51.719 00:08:51.719 Suite: memory 00:08:51.719 Test: alloc and free memory map ...[2024-12-05 11:52:16.700600] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 
00:08:51.719 passed 00:08:51.719 Test: mem map translation ...[2024-12-05 11:52:16.726118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:08:51.719 [2024-12-05 11:52:16.726148] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:08:51.719 [2024-12-05 11:52:16.726199] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:08:51.719 [2024-12-05 11:52:16.726206] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:08:51.981 passed 00:08:51.981 Test: mem map registration ...[2024-12-05 11:52:16.781622] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:08:51.981 [2024-12-05 11:52:16.781657] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:08:51.981 passed 00:08:51.981 Test: mem map adjacent registrations ...passed 00:08:51.981 00:08:51.981 Run Summary: Type Total Ran Passed Failed Inactive 00:08:51.981 suites 1 1 n/a 0 0 00:08:51.981 tests 4 4 4 0 0 00:08:51.981 asserts 152 152 152 0 n/a 00:08:51.981 00:08:51.981 Elapsed time = 0.194 seconds 00:08:51.981 00:08:51.981 real 0m0.209s 00:08:51.981 user 0m0.200s 00:08:51.981 sys 0m0.008s 00:08:51.981 11:52:16 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.981 11:52:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:08:51.981 ************************************ 
00:08:51.981 END TEST env_memory 00:08:51.981 ************************************ 00:08:51.981 11:52:16 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:51.981 11:52:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.981 11:52:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.981 11:52:16 env -- common/autotest_common.sh@10 -- # set +x 00:08:51.981 ************************************ 00:08:51.981 START TEST env_vtophys 00:08:51.981 ************************************ 00:08:51.981 11:52:16 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:08:51.981 EAL: lib.eal log level changed from notice to debug 00:08:51.981 EAL: Detected lcore 0 as core 0 on socket 0 00:08:51.981 EAL: Detected lcore 1 as core 1 on socket 0 00:08:51.981 EAL: Detected lcore 2 as core 2 on socket 0 00:08:51.981 EAL: Detected lcore 3 as core 3 on socket 0 00:08:51.981 EAL: Detected lcore 4 as core 4 on socket 0 00:08:51.981 EAL: Detected lcore 5 as core 5 on socket 0 00:08:51.981 EAL: Detected lcore 6 as core 6 on socket 0 00:08:51.981 EAL: Detected lcore 7 as core 7 on socket 0 00:08:51.981 EAL: Detected lcore 8 as core 8 on socket 0 00:08:51.981 EAL: Detected lcore 9 as core 9 on socket 0 00:08:51.981 EAL: Detected lcore 10 as core 10 on socket 0 00:08:51.981 EAL: Detected lcore 11 as core 11 on socket 0 00:08:51.981 EAL: Detected lcore 12 as core 12 on socket 0 00:08:51.981 EAL: Detected lcore 13 as core 13 on socket 0 00:08:51.981 EAL: Detected lcore 14 as core 14 on socket 0 00:08:51.981 EAL: Detected lcore 15 as core 15 on socket 0 00:08:51.981 EAL: Detected lcore 16 as core 16 on socket 0 00:08:51.981 EAL: Detected lcore 17 as core 17 on socket 0 00:08:51.981 EAL: Detected lcore 18 as core 18 on socket 0 00:08:51.981 EAL: Detected lcore 19 as core 19 on socket 0 00:08:51.981 EAL: Detected 
lcore 20 as core 20 on socket 0 00:08:51.981 EAL: Detected lcore 21 as core 21 on socket 0 00:08:51.981 EAL: Detected lcore 22 as core 22 on socket 0 00:08:51.981 EAL: Detected lcore 23 as core 23 on socket 0 00:08:51.981 EAL: Detected lcore 24 as core 24 on socket 0 00:08:51.981 EAL: Detected lcore 25 as core 25 on socket 0 00:08:51.981 EAL: Detected lcore 26 as core 26 on socket 0 00:08:51.981 EAL: Detected lcore 27 as core 27 on socket 0 00:08:51.981 EAL: Detected lcore 28 as core 28 on socket 0 00:08:51.981 EAL: Detected lcore 29 as core 29 on socket 0 00:08:51.981 EAL: Detected lcore 30 as core 30 on socket 0 00:08:51.981 EAL: Detected lcore 31 as core 31 on socket 0 00:08:51.981 EAL: Detected lcore 32 as core 32 on socket 0 00:08:51.981 EAL: Detected lcore 33 as core 33 on socket 0 00:08:51.981 EAL: Detected lcore 34 as core 34 on socket 0 00:08:51.981 EAL: Detected lcore 35 as core 35 on socket 0 00:08:51.981 EAL: Detected lcore 36 as core 0 on socket 1 00:08:51.981 EAL: Detected lcore 37 as core 1 on socket 1 00:08:51.981 EAL: Detected lcore 38 as core 2 on socket 1 00:08:51.981 EAL: Detected lcore 39 as core 3 on socket 1 00:08:51.981 EAL: Detected lcore 40 as core 4 on socket 1 00:08:51.981 EAL: Detected lcore 41 as core 5 on socket 1 00:08:51.981 EAL: Detected lcore 42 as core 6 on socket 1 00:08:51.981 EAL: Detected lcore 43 as core 7 on socket 1 00:08:51.981 EAL: Detected lcore 44 as core 8 on socket 1 00:08:51.981 EAL: Detected lcore 45 as core 9 on socket 1 00:08:51.981 EAL: Detected lcore 46 as core 10 on socket 1 00:08:51.981 EAL: Detected lcore 47 as core 11 on socket 1 00:08:51.981 EAL: Detected lcore 48 as core 12 on socket 1 00:08:51.981 EAL: Detected lcore 49 as core 13 on socket 1 00:08:51.981 EAL: Detected lcore 50 as core 14 on socket 1 00:08:51.981 EAL: Detected lcore 51 as core 15 on socket 1 00:08:51.981 EAL: Detected lcore 52 as core 16 on socket 1 00:08:51.981 EAL: Detected lcore 53 as core 17 on socket 1 00:08:51.981 EAL: Detected 
lcore 54 as core 18 on socket 1 00:08:51.981 EAL: Detected lcore 55 as core 19 on socket 1 00:08:51.981 EAL: Detected lcore 56 as core 20 on socket 1 00:08:51.981 EAL: Detected lcore 57 as core 21 on socket 1 00:08:51.981 EAL: Detected lcore 58 as core 22 on socket 1 00:08:51.981 EAL: Detected lcore 59 as core 23 on socket 1 00:08:51.981 EAL: Detected lcore 60 as core 24 on socket 1 00:08:51.981 EAL: Detected lcore 61 as core 25 on socket 1 00:08:51.981 EAL: Detected lcore 62 as core 26 on socket 1 00:08:51.981 EAL: Detected lcore 63 as core 27 on socket 1 00:08:51.981 EAL: Detected lcore 64 as core 28 on socket 1 00:08:51.981 EAL: Detected lcore 65 as core 29 on socket 1 00:08:51.981 EAL: Detected lcore 66 as core 30 on socket 1 00:08:51.981 EAL: Detected lcore 67 as core 31 on socket 1 00:08:51.981 EAL: Detected lcore 68 as core 32 on socket 1 00:08:51.981 EAL: Detected lcore 69 as core 33 on socket 1 00:08:51.981 EAL: Detected lcore 70 as core 34 on socket 1 00:08:51.981 EAL: Detected lcore 71 as core 35 on socket 1 00:08:51.981 EAL: Detected lcore 72 as core 0 on socket 0 00:08:51.981 EAL: Detected lcore 73 as core 1 on socket 0 00:08:51.981 EAL: Detected lcore 74 as core 2 on socket 0 00:08:51.981 EAL: Detected lcore 75 as core 3 on socket 0 00:08:51.981 EAL: Detected lcore 76 as core 4 on socket 0 00:08:51.981 EAL: Detected lcore 77 as core 5 on socket 0 00:08:51.981 EAL: Detected lcore 78 as core 6 on socket 0 00:08:51.981 EAL: Detected lcore 79 as core 7 on socket 0 00:08:51.981 EAL: Detected lcore 80 as core 8 on socket 0 00:08:51.981 EAL: Detected lcore 81 as core 9 on socket 0 00:08:51.981 EAL: Detected lcore 82 as core 10 on socket 0 00:08:51.981 EAL: Detected lcore 83 as core 11 on socket 0 00:08:51.981 EAL: Detected lcore 84 as core 12 on socket 0 00:08:51.982 EAL: Detected lcore 85 as core 13 on socket 0 00:08:51.982 EAL: Detected lcore 86 as core 14 on socket 0 00:08:51.982 EAL: Detected lcore 87 as core 15 on socket 0 00:08:51.982 EAL: Detected 
lcore 88 as core 16 on socket 0 00:08:51.982 EAL: Detected lcore 89 as core 17 on socket 0 00:08:51.982 EAL: Detected lcore 90 as core 18 on socket 0 00:08:51.982 EAL: Detected lcore 91 as core 19 on socket 0 00:08:51.982 EAL: Detected lcore 92 as core 20 on socket 0 00:08:51.982 EAL: Detected lcore 93 as core 21 on socket 0 00:08:51.982 EAL: Detected lcore 94 as core 22 on socket 0 00:08:51.982 EAL: Detected lcore 95 as core 23 on socket 0 00:08:51.982 EAL: Detected lcore 96 as core 24 on socket 0 00:08:51.982 EAL: Detected lcore 97 as core 25 on socket 0 00:08:51.982 EAL: Detected lcore 98 as core 26 on socket 0 00:08:51.982 EAL: Detected lcore 99 as core 27 on socket 0 00:08:51.982 EAL: Detected lcore 100 as core 28 on socket 0 00:08:51.982 EAL: Detected lcore 101 as core 29 on socket 0 00:08:51.982 EAL: Detected lcore 102 as core 30 on socket 0 00:08:51.982 EAL: Detected lcore 103 as core 31 on socket 0 00:08:51.982 EAL: Detected lcore 104 as core 32 on socket 0 00:08:51.982 EAL: Detected lcore 105 as core 33 on socket 0 00:08:51.982 EAL: Detected lcore 106 as core 34 on socket 0 00:08:51.982 EAL: Detected lcore 107 as core 35 on socket 0 00:08:51.982 EAL: Detected lcore 108 as core 0 on socket 1 00:08:51.982 EAL: Detected lcore 109 as core 1 on socket 1 00:08:51.982 EAL: Detected lcore 110 as core 2 on socket 1 00:08:51.982 EAL: Detected lcore 111 as core 3 on socket 1 00:08:51.982 EAL: Detected lcore 112 as core 4 on socket 1 00:08:51.982 EAL: Detected lcore 113 as core 5 on socket 1 00:08:51.982 EAL: Detected lcore 114 as core 6 on socket 1 00:08:51.982 EAL: Detected lcore 115 as core 7 on socket 1 00:08:51.982 EAL: Detected lcore 116 as core 8 on socket 1 00:08:51.982 EAL: Detected lcore 117 as core 9 on socket 1 00:08:51.982 EAL: Detected lcore 118 as core 10 on socket 1 00:08:51.982 EAL: Detected lcore 119 as core 11 on socket 1 00:08:51.982 EAL: Detected lcore 120 as core 12 on socket 1 00:08:51.982 EAL: Detected lcore 121 as core 13 on socket 1 
00:08:51.982 EAL: Detected lcore 122 as core 14 on socket 1 00:08:51.982 EAL: Detected lcore 123 as core 15 on socket 1 00:08:51.982 EAL: Detected lcore 124 as core 16 on socket 1 00:08:51.982 EAL: Detected lcore 125 as core 17 on socket 1 00:08:51.982 EAL: Detected lcore 126 as core 18 on socket 1 00:08:51.982 EAL: Detected lcore 127 as core 19 on socket 1 00:08:51.982 EAL: Skipped lcore 128 as core 20 on socket 1 00:08:51.982 EAL: Skipped lcore 129 as core 21 on socket 1 00:08:51.982 EAL: Skipped lcore 130 as core 22 on socket 1 00:08:51.982 EAL: Skipped lcore 131 as core 23 on socket 1 00:08:51.982 EAL: Skipped lcore 132 as core 24 on socket 1 00:08:51.982 EAL: Skipped lcore 133 as core 25 on socket 1 00:08:51.982 EAL: Skipped lcore 134 as core 26 on socket 1 00:08:51.982 EAL: Skipped lcore 135 as core 27 on socket 1 00:08:51.982 EAL: Skipped lcore 136 as core 28 on socket 1 00:08:51.982 EAL: Skipped lcore 137 as core 29 on socket 1 00:08:51.982 EAL: Skipped lcore 138 as core 30 on socket 1 00:08:51.982 EAL: Skipped lcore 139 as core 31 on socket 1 00:08:51.982 EAL: Skipped lcore 140 as core 32 on socket 1 00:08:51.982 EAL: Skipped lcore 141 as core 33 on socket 1 00:08:51.982 EAL: Skipped lcore 142 as core 34 on socket 1 00:08:51.982 EAL: Skipped lcore 143 as core 35 on socket 1 00:08:51.982 EAL: Maximum logical cores by configuration: 128 00:08:51.982 EAL: Detected CPU lcores: 128 00:08:51.982 EAL: Detected NUMA nodes: 2 00:08:51.982 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:08:51.982 EAL: Detected shared linkage of DPDK 00:08:51.982 EAL: No shared files mode enabled, IPC will be disabled 00:08:51.982 EAL: Bus pci wants IOVA as 'DC' 00:08:51.982 EAL: Buses did not request a specific IOVA mode. 00:08:51.982 EAL: IOMMU is available, selecting IOVA as VA mode. 00:08:51.982 EAL: Selected IOVA mode 'VA' 00:08:51.982 EAL: Probing VFIO support... 
00:08:51.982 EAL: IOMMU type 1 (Type 1) is supported 00:08:51.982 EAL: IOMMU type 7 (sPAPR) is not supported 00:08:51.982 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:08:51.982 EAL: VFIO support initialized 00:08:51.982 EAL: Ask a virtual area of 0x2e000 bytes 00:08:51.982 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:08:51.982 EAL: Setting up physically contiguous memory... 00:08:51.982 EAL: Setting maximum number of open files to 524288 00:08:51.982 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:08:51.982 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:08:51.982 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:08:51.982 EAL: Ask a virtual area of 0x61000 bytes 00:08:51.982 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:08:51.982 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:51.982 EAL: Ask a virtual area of 0x400000000 bytes 00:08:51.982 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:08:51.982 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:08:51.982 EAL: Ask a virtual area of 0x61000 bytes 00:08:51.982 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:08:51.982 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:51.982 EAL: Ask a virtual area of 0x400000000 bytes 00:08:51.982 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:08:51.982 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:08:51.982 EAL: Ask a virtual area of 0x61000 bytes 00:08:51.982 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:08:51.982 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:51.982 EAL: Ask a virtual area of 0x400000000 bytes 00:08:51.982 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:08:51.982 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:08:51.982 EAL: Ask a virtual area of 0x61000 bytes 00:08:51.982 EAL: 
Virtual area found at 0x200c00600000 (size = 0x61000) 00:08:51.982 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:08:51.982 EAL: Ask a virtual area of 0x400000000 bytes 00:08:51.982 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:08:51.982 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:08:51.982 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:08:51.982 EAL: Ask a virtual area of 0x61000 bytes 00:08:51.982 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:08:51.982 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:51.982 EAL: Ask a virtual area of 0x400000000 bytes 00:08:51.982 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:08:51.982 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:08:51.982 EAL: Ask a virtual area of 0x61000 bytes 00:08:51.982 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:08:51.982 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:51.982 EAL: Ask a virtual area of 0x400000000 bytes 00:08:51.982 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:08:51.982 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:08:51.982 EAL: Ask a virtual area of 0x61000 bytes 00:08:51.982 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:08:51.982 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:51.982 EAL: Ask a virtual area of 0x400000000 bytes 00:08:51.982 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:08:51.982 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:08:51.982 EAL: Ask a virtual area of 0x61000 bytes 00:08:51.982 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:08:51.982 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:08:51.982 EAL: Ask a virtual area of 0x400000000 bytes 00:08:51.982 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 
00:08:51.982 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:08:51.982 EAL: Hugepages will be freed exactly as allocated. 00:08:51.982 EAL: No shared files mode enabled, IPC is disabled 00:08:51.982 EAL: No shared files mode enabled, IPC is disabled 00:08:51.982 EAL: TSC frequency is ~2400000 KHz 00:08:51.982 EAL: Main lcore 0 is ready (tid=7f67acff9a00;cpuset=[0]) 00:08:51.982 EAL: Trying to obtain current memory policy. 00:08:51.982 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:51.982 EAL: Restoring previous memory policy: 0 00:08:51.982 EAL: request: mp_malloc_sync 00:08:51.982 EAL: No shared files mode enabled, IPC is disabled 00:08:51.982 EAL: Heap on socket 0 was expanded by 2MB 00:08:51.982 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: No PCI address specified using 'addr=' in: bus=pci 00:08:52.245 EAL: Mem event callback 'spdk:(nil)' registered 00:08:52.245 00:08:52.245 00:08:52.245 CUnit - A unit testing framework for C - Version 2.1-3 00:08:52.245 http://cunit.sourceforge.net/ 00:08:52.245 00:08:52.245 00:08:52.245 Suite: components_suite 00:08:52.245 Test: vtophys_malloc_test ...passed 00:08:52.245 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:08:52.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:52.245 EAL: Restoring previous memory policy: 4 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was expanded by 4MB 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was shrunk by 4MB 00:08:52.245 EAL: Trying to obtain current memory policy. 
00:08:52.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:52.245 EAL: Restoring previous memory policy: 4 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was expanded by 6MB 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was shrunk by 6MB 00:08:52.245 EAL: Trying to obtain current memory policy. 00:08:52.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:52.245 EAL: Restoring previous memory policy: 4 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was expanded by 10MB 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was shrunk by 10MB 00:08:52.245 EAL: Trying to obtain current memory policy. 00:08:52.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:52.245 EAL: Restoring previous memory policy: 4 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was expanded by 18MB 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was shrunk by 18MB 00:08:52.245 EAL: Trying to obtain current memory policy. 
00:08:52.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:52.245 EAL: Restoring previous memory policy: 4 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was expanded by 34MB 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was shrunk by 34MB 00:08:52.245 EAL: Trying to obtain current memory policy. 00:08:52.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:52.245 EAL: Restoring previous memory policy: 4 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was expanded by 66MB 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was shrunk by 66MB 00:08:52.245 EAL: Trying to obtain current memory policy. 00:08:52.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:52.245 EAL: Restoring previous memory policy: 4 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was expanded by 130MB 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was shrunk by 130MB 00:08:52.245 EAL: Trying to obtain current memory policy. 
00:08:52.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:52.245 EAL: Restoring previous memory policy: 4 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was expanded by 258MB 00:08:52.245 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.245 EAL: request: mp_malloc_sync 00:08:52.245 EAL: No shared files mode enabled, IPC is disabled 00:08:52.245 EAL: Heap on socket 0 was shrunk by 258MB 00:08:52.245 EAL: Trying to obtain current memory policy. 00:08:52.245 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:52.505 EAL: Restoring previous memory policy: 4 00:08:52.505 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.505 EAL: request: mp_malloc_sync 00:08:52.505 EAL: No shared files mode enabled, IPC is disabled 00:08:52.505 EAL: Heap on socket 0 was expanded by 514MB 00:08:52.505 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.505 EAL: request: mp_malloc_sync 00:08:52.505 EAL: No shared files mode enabled, IPC is disabled 00:08:52.505 EAL: Heap on socket 0 was shrunk by 514MB 00:08:52.505 EAL: Trying to obtain current memory policy. 
00:08:52.505 EAL: Setting policy MPOL_PREFERRED for socket 0 00:08:52.766 EAL: Restoring previous memory policy: 4 00:08:52.766 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.766 EAL: request: mp_malloc_sync 00:08:52.766 EAL: No shared files mode enabled, IPC is disabled 00:08:52.766 EAL: Heap on socket 0 was expanded by 1026MB 00:08:52.766 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.766 EAL: request: mp_malloc_sync 00:08:52.766 EAL: No shared files mode enabled, IPC is disabled 00:08:52.766 EAL: Heap on socket 0 was shrunk by 1026MB 00:08:52.766 passed 00:08:52.766 00:08:52.766 Run Summary: Type Total Ran Passed Failed Inactive 00:08:52.766 suites 1 1 n/a 0 0 00:08:52.766 tests 2 2 2 0 0 00:08:52.766 asserts 497 497 497 0 n/a 00:08:52.766 00:08:52.766 Elapsed time = 0.692 seconds 00:08:52.766 EAL: Calling mem event callback 'spdk:(nil)' 00:08:52.766 EAL: request: mp_malloc_sync 00:08:52.766 EAL: No shared files mode enabled, IPC is disabled 00:08:52.766 EAL: Heap on socket 0 was shrunk by 2MB 00:08:52.766 EAL: No shared files mode enabled, IPC is disabled 00:08:52.766 EAL: No shared files mode enabled, IPC is disabled 00:08:52.766 EAL: No shared files mode enabled, IPC is disabled 00:08:52.766 00:08:52.766 real 0m0.850s 00:08:52.766 user 0m0.455s 00:08:52.766 sys 0m0.360s 00:08:52.766 11:52:17 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:52.766 11:52:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:08:52.766 ************************************ 00:08:52.766 END TEST env_vtophys 00:08:52.766 ************************************ 00:08:53.027 11:52:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:53.027 11:52:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:53.027 11:52:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.027 11:52:17 env -- common/autotest_common.sh@10 -- # set +x 00:08:53.027 
************************************ 00:08:53.027 START TEST env_pci 00:08:53.028 ************************************ 00:08:53.028 11:52:17 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:08:53.028 00:08:53.028 00:08:53.028 CUnit - A unit testing framework for C - Version 2.1-3 00:08:53.028 http://cunit.sourceforge.net/ 00:08:53.028 00:08:53.028 00:08:53.028 Suite: pci 00:08:53.028 Test: pci_hook ...[2024-12-05 11:52:17.882893] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1119472 has claimed it 00:08:53.028 EAL: Cannot find device (10000:00:01.0) 00:08:53.028 EAL: Failed to attach device on primary process 00:08:53.028 passed 00:08:53.028 00:08:53.028 Run Summary: Type Total Ran Passed Failed Inactive 00:08:53.028 suites 1 1 n/a 0 0 00:08:53.028 tests 1 1 1 0 0 00:08:53.028 asserts 25 25 25 0 n/a 00:08:53.028 00:08:53.028 Elapsed time = 0.031 seconds 00:08:53.028 00:08:53.028 real 0m0.053s 00:08:53.028 user 0m0.014s 00:08:53.028 sys 0m0.038s 00:08:53.028 11:52:17 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:53.028 11:52:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:08:53.028 ************************************ 00:08:53.028 END TEST env_pci 00:08:53.028 ************************************ 00:08:53.028 11:52:17 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:08:53.028 11:52:17 env -- env/env.sh@15 -- # uname 00:08:53.028 11:52:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:08:53.028 11:52:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:08:53.028 11:52:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:53.028 11:52:17 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:53.028 11:52:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:53.028 11:52:17 env -- common/autotest_common.sh@10 -- # set +x 00:08:53.028 ************************************ 00:08:53.028 START TEST env_dpdk_post_init 00:08:53.028 ************************************ 00:08:53.028 11:52:18 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:08:53.028 EAL: Detected CPU lcores: 128 00:08:53.028 EAL: Detected NUMA nodes: 2 00:08:53.028 EAL: Detected shared linkage of DPDK 00:08:53.028 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:53.028 EAL: Selected IOVA mode 'VA' 00:08:53.028 EAL: VFIO support initialized 00:08:53.028 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:53.289 EAL: Using IOMMU type 1 (Type 1) 00:08:53.289 EAL: Ignore mapping IO port bar(1) 00:08:53.550 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:08:53.550 EAL: Ignore mapping IO port bar(1) 00:08:53.550 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:08:53.811 EAL: Ignore mapping IO port bar(1) 00:08:53.811 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:08:54.072 EAL: Ignore mapping IO port bar(1) 00:08:54.072 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:08:54.333 EAL: Ignore mapping IO port bar(1) 00:08:54.333 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:08:54.594 EAL: Ignore mapping IO port bar(1) 00:08:54.594 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:08:54.594 EAL: Ignore mapping IO port bar(1) 00:08:54.854 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:08:54.854 EAL: Ignore mapping IO port bar(1) 00:08:55.115 EAL: 
Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:08:55.115 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:08:55.375 EAL: Ignore mapping IO port bar(1) 00:08:55.375 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 00:08:55.636 EAL: Ignore mapping IO port bar(1) 00:08:55.636 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:08:55.898 EAL: Ignore mapping IO port bar(1) 00:08:55.898 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:08:56.159 EAL: Ignore mapping IO port bar(1) 00:08:56.159 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:08:56.159 EAL: Ignore mapping IO port bar(1) 00:08:56.420 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:08:56.420 EAL: Ignore mapping IO port bar(1) 00:08:56.680 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:08:56.680 EAL: Ignore mapping IO port bar(1) 00:08:56.941 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:08:56.941 EAL: Ignore mapping IO port bar(1) 00:08:56.941 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:08:56.941 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:08:56.941 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:08:57.200 Starting DPDK initialization... 00:08:57.200 Starting SPDK post initialization... 00:08:57.200 SPDK NVMe probe 00:08:57.200 Attaching to 0000:65:00.0 00:08:57.200 Attached to 0000:65:00.0 00:08:57.200 Cleaning up... 
00:08:59.117 00:08:59.117 real 0m5.745s 00:08:59.117 user 0m0.107s 00:08:59.117 sys 0m0.195s 00:08:59.117 11:52:23 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.117 11:52:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:08:59.117 ************************************ 00:08:59.117 END TEST env_dpdk_post_init 00:08:59.117 ************************************ 00:08:59.117 11:52:23 env -- env/env.sh@26 -- # uname 00:08:59.117 11:52:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:08:59.117 11:52:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:59.117 11:52:23 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.117 11:52:23 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.117 11:52:23 env -- common/autotest_common.sh@10 -- # set +x 00:08:59.117 ************************************ 00:08:59.117 START TEST env_mem_callbacks 00:08:59.117 ************************************ 00:08:59.117 11:52:23 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:08:59.117 EAL: Detected CPU lcores: 128 00:08:59.117 EAL: Detected NUMA nodes: 2 00:08:59.117 EAL: Detected shared linkage of DPDK 00:08:59.117 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:08:59.117 EAL: Selected IOVA mode 'VA' 00:08:59.117 EAL: VFIO support initialized 00:08:59.117 TELEMETRY: No legacy callbacks, legacy socket not created 00:08:59.117 00:08:59.117 00:08:59.117 CUnit - A unit testing framework for C - Version 2.1-3 00:08:59.117 http://cunit.sourceforge.net/ 00:08:59.117 00:08:59.117 00:08:59.117 Suite: memory 00:08:59.117 Test: test ... 
00:08:59.117 register 0x200000200000 2097152 00:08:59.117 malloc 3145728 00:08:59.117 register 0x200000400000 4194304 00:08:59.117 buf 0x200000500000 len 3145728 PASSED 00:08:59.117 malloc 64 00:08:59.117 buf 0x2000004fff40 len 64 PASSED 00:08:59.117 malloc 4194304 00:08:59.117 register 0x200000800000 6291456 00:08:59.117 buf 0x200000a00000 len 4194304 PASSED 00:08:59.117 free 0x200000500000 3145728 00:08:59.117 free 0x2000004fff40 64 00:08:59.117 unregister 0x200000400000 4194304 PASSED 00:08:59.117 free 0x200000a00000 4194304 00:08:59.117 unregister 0x200000800000 6291456 PASSED 00:08:59.117 malloc 8388608 00:08:59.117 register 0x200000400000 10485760 00:08:59.117 buf 0x200000600000 len 8388608 PASSED 00:08:59.117 free 0x200000600000 8388608 00:08:59.117 unregister 0x200000400000 10485760 PASSED 00:08:59.117 passed 00:08:59.117 00:08:59.117 Run Summary: Type Total Ran Passed Failed Inactive 00:08:59.117 suites 1 1 n/a 0 0 00:08:59.117 tests 1 1 1 0 0 00:08:59.117 asserts 15 15 15 0 n/a 00:08:59.117 00:08:59.117 Elapsed time = 0.011 seconds 00:08:59.117 00:08:59.117 real 0m0.071s 00:08:59.117 user 0m0.027s 00:08:59.117 sys 0m0.045s 00:08:59.117 11:52:23 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.117 11:52:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:08:59.117 ************************************ 00:08:59.117 END TEST env_mem_callbacks 00:08:59.117 ************************************ 00:08:59.117 00:08:59.117 real 0m7.547s 00:08:59.117 user 0m1.078s 00:08:59.117 sys 0m1.027s 00:08:59.117 11:52:23 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.117 11:52:23 env -- common/autotest_common.sh@10 -- # set +x 00:08:59.117 ************************************ 00:08:59.117 END TEST env 00:08:59.117 ************************************ 00:08:59.117 11:52:23 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:59.117 11:52:23 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.117 11:52:23 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.117 11:52:23 -- common/autotest_common.sh@10 -- # set +x 00:08:59.117 ************************************ 00:08:59.117 START TEST rpc 00:08:59.117 ************************************ 00:08:59.117 11:52:24 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:08:59.117 * Looking for test storage... 00:08:59.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:08:59.117 11:52:24 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:59.117 11:52:24 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:59.117 11:52:24 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:59.378 11:52:24 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:59.378 11:52:24 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.378 11:52:24 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.378 11:52:24 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.378 11:52:24 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.378 11:52:24 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.378 11:52:24 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.378 11:52:24 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.378 11:52:24 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.378 11:52:24 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.378 11:52:24 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.378 11:52:24 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.378 11:52:24 rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:59.378 11:52:24 rpc -- scripts/common.sh@345 -- # : 1 00:08:59.378 11:52:24 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.378 11:52:24 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:59.378 11:52:24 rpc -- scripts/common.sh@365 -- # decimal 1 00:08:59.378 11:52:24 rpc -- scripts/common.sh@353 -- # local d=1 00:08:59.378 11:52:24 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.378 11:52:24 rpc -- scripts/common.sh@355 -- # echo 1 00:08:59.378 11:52:24 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.378 11:52:24 rpc -- scripts/common.sh@366 -- # decimal 2 00:08:59.378 11:52:24 rpc -- scripts/common.sh@353 -- # local d=2 00:08:59.378 11:52:24 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.378 11:52:24 rpc -- scripts/common.sh@355 -- # echo 2 00:08:59.378 11:52:24 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.378 11:52:24 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.378 11:52:24 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.378 11:52:24 rpc -- scripts/common.sh@368 -- # return 0 00:08:59.378 11:52:24 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.378 11:52:24 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:59.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.378 --rc genhtml_branch_coverage=1 00:08:59.378 --rc genhtml_function_coverage=1 00:08:59.378 --rc genhtml_legend=1 00:08:59.378 --rc geninfo_all_blocks=1 00:08:59.378 --rc geninfo_unexecuted_blocks=1 00:08:59.378 00:08:59.378 ' 00:08:59.378 11:52:24 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:59.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.378 --rc genhtml_branch_coverage=1 00:08:59.378 --rc genhtml_function_coverage=1 00:08:59.378 --rc genhtml_legend=1 00:08:59.378 --rc geninfo_all_blocks=1 00:08:59.378 --rc geninfo_unexecuted_blocks=1 00:08:59.378 00:08:59.378 ' 00:08:59.378 11:52:24 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:59.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:08:59.378 --rc genhtml_branch_coverage=1 00:08:59.378 --rc genhtml_function_coverage=1 00:08:59.378 --rc genhtml_legend=1 00:08:59.378 --rc geninfo_all_blocks=1 00:08:59.378 --rc geninfo_unexecuted_blocks=1 00:08:59.378 00:08:59.378 ' 00:08:59.378 11:52:24 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:59.378 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.378 --rc genhtml_branch_coverage=1 00:08:59.378 --rc genhtml_function_coverage=1 00:08:59.378 --rc genhtml_legend=1 00:08:59.378 --rc geninfo_all_blocks=1 00:08:59.378 --rc geninfo_unexecuted_blocks=1 00:08:59.378 00:08:59.378 ' 00:08:59.378 11:52:24 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1120893 00:08:59.378 11:52:24 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:59.378 11:52:24 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1120893 00:08:59.378 11:52:24 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:08:59.378 11:52:24 rpc -- common/autotest_common.sh@835 -- # '[' -z 1120893 ']' 00:08:59.378 11:52:24 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:59.378 11:52:24 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:59.378 11:52:24 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:59.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:59.378 11:52:24 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:59.378 11:52:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:08:59.378 [2024-12-05 11:52:24.303803] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:08:59.378 [2024-12-05 11:52:24.303874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120893 ] 00:08:59.378 [2024-12-05 11:52:24.396980] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.685 [2024-12-05 11:52:24.449926] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:08:59.685 [2024-12-05 11:52:24.449988] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1120893' to capture a snapshot of events at runtime. 00:08:59.685 [2024-12-05 11:52:24.449996] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:59.685 [2024-12-05 11:52:24.450004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:59.685 [2024-12-05 11:52:24.450010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1120893 for offline analysis/debug. 
00:08:59.685 [2024-12-05 11:52:24.450782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.292 11:52:25 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.292 11:52:25 rpc -- common/autotest_common.sh@868 -- # return 0 00:09:00.292 11:52:25 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:00.292 11:52:25 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:00.292 11:52:25 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:09:00.292 11:52:25 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:09:00.292 11:52:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.292 11:52:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.292 11:52:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.292 ************************************ 00:09:00.292 START TEST rpc_integrity 00:09:00.292 ************************************ 00:09:00.292 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:00.292 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:00.292 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.292 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.292 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.292 11:52:25 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:09:00.292 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:00.292 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:00.292 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:00.292 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.292 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.292 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.292 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:09:00.292 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:00.292 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.292 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.292 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.292 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:00.292 { 00:09:00.292 "name": "Malloc0", 00:09:00.292 "aliases": [ 00:09:00.292 "494ba542-b80f-4e00-a416-ea5d24ba31ba" 00:09:00.292 ], 00:09:00.292 "product_name": "Malloc disk", 00:09:00.292 "block_size": 512, 00:09:00.293 "num_blocks": 16384, 00:09:00.293 "uuid": "494ba542-b80f-4e00-a416-ea5d24ba31ba", 00:09:00.293 "assigned_rate_limits": { 00:09:00.293 "rw_ios_per_sec": 0, 00:09:00.293 "rw_mbytes_per_sec": 0, 00:09:00.293 "r_mbytes_per_sec": 0, 00:09:00.293 "w_mbytes_per_sec": 0 00:09:00.293 }, 00:09:00.293 "claimed": false, 00:09:00.293 "zoned": false, 00:09:00.293 "supported_io_types": { 00:09:00.293 "read": true, 00:09:00.293 "write": true, 00:09:00.293 "unmap": true, 00:09:00.293 "flush": true, 00:09:00.293 "reset": true, 00:09:00.293 "nvme_admin": false, 00:09:00.293 "nvme_io": false, 00:09:00.293 "nvme_io_md": false, 00:09:00.293 "write_zeroes": true, 00:09:00.293 "zcopy": true, 00:09:00.293 "get_zone_info": false, 00:09:00.293 
"zone_management": false, 00:09:00.293 "zone_append": false, 00:09:00.293 "compare": false, 00:09:00.293 "compare_and_write": false, 00:09:00.293 "abort": true, 00:09:00.293 "seek_hole": false, 00:09:00.293 "seek_data": false, 00:09:00.293 "copy": true, 00:09:00.293 "nvme_iov_md": false 00:09:00.293 }, 00:09:00.293 "memory_domains": [ 00:09:00.293 { 00:09:00.293 "dma_device_id": "system", 00:09:00.293 "dma_device_type": 1 00:09:00.293 }, 00:09:00.293 { 00:09:00.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.293 "dma_device_type": 2 00:09:00.293 } 00:09:00.293 ], 00:09:00.293 "driver_specific": {} 00:09:00.293 } 00:09:00.293 ]' 00:09:00.293 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:00.293 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:00.293 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:09:00.293 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.293 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.293 [2024-12-05 11:52:25.300168] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:09:00.293 [2024-12-05 11:52:25.300218] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:00.293 [2024-12-05 11:52:25.300235] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7d2ae0 00:09:00.293 [2024-12-05 11:52:25.300243] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:00.293 [2024-12-05 11:52:25.301861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:00.293 [2024-12-05 11:52:25.301897] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:00.293 Passthru0 00:09:00.293 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.293 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:09:00.293 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.293 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.293 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.293 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:00.293 { 00:09:00.293 "name": "Malloc0", 00:09:00.293 "aliases": [ 00:09:00.293 "494ba542-b80f-4e00-a416-ea5d24ba31ba" 00:09:00.293 ], 00:09:00.293 "product_name": "Malloc disk", 00:09:00.293 "block_size": 512, 00:09:00.293 "num_blocks": 16384, 00:09:00.293 "uuid": "494ba542-b80f-4e00-a416-ea5d24ba31ba", 00:09:00.293 "assigned_rate_limits": { 00:09:00.293 "rw_ios_per_sec": 0, 00:09:00.293 "rw_mbytes_per_sec": 0, 00:09:00.293 "r_mbytes_per_sec": 0, 00:09:00.293 "w_mbytes_per_sec": 0 00:09:00.293 }, 00:09:00.293 "claimed": true, 00:09:00.293 "claim_type": "exclusive_write", 00:09:00.293 "zoned": false, 00:09:00.293 "supported_io_types": { 00:09:00.293 "read": true, 00:09:00.293 "write": true, 00:09:00.293 "unmap": true, 00:09:00.293 "flush": true, 00:09:00.293 "reset": true, 00:09:00.293 "nvme_admin": false, 00:09:00.293 "nvme_io": false, 00:09:00.293 "nvme_io_md": false, 00:09:00.293 "write_zeroes": true, 00:09:00.293 "zcopy": true, 00:09:00.293 "get_zone_info": false, 00:09:00.293 "zone_management": false, 00:09:00.293 "zone_append": false, 00:09:00.293 "compare": false, 00:09:00.293 "compare_and_write": false, 00:09:00.293 "abort": true, 00:09:00.293 "seek_hole": false, 00:09:00.293 "seek_data": false, 00:09:00.293 "copy": true, 00:09:00.293 "nvme_iov_md": false 00:09:00.293 }, 00:09:00.293 "memory_domains": [ 00:09:00.293 { 00:09:00.293 "dma_device_id": "system", 00:09:00.293 "dma_device_type": 1 00:09:00.293 }, 00:09:00.293 { 00:09:00.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.293 "dma_device_type": 2 00:09:00.293 } 00:09:00.293 ], 00:09:00.293 "driver_specific": {} 00:09:00.293 }, 00:09:00.293 { 
00:09:00.293 "name": "Passthru0", 00:09:00.293 "aliases": [ 00:09:00.293 "42ea1009-71fa-5639-a9ce-a7a9d1761de4" 00:09:00.293 ], 00:09:00.293 "product_name": "passthru", 00:09:00.293 "block_size": 512, 00:09:00.293 "num_blocks": 16384, 00:09:00.293 "uuid": "42ea1009-71fa-5639-a9ce-a7a9d1761de4", 00:09:00.293 "assigned_rate_limits": { 00:09:00.293 "rw_ios_per_sec": 0, 00:09:00.293 "rw_mbytes_per_sec": 0, 00:09:00.293 "r_mbytes_per_sec": 0, 00:09:00.293 "w_mbytes_per_sec": 0 00:09:00.293 }, 00:09:00.293 "claimed": false, 00:09:00.293 "zoned": false, 00:09:00.293 "supported_io_types": { 00:09:00.293 "read": true, 00:09:00.293 "write": true, 00:09:00.293 "unmap": true, 00:09:00.293 "flush": true, 00:09:00.293 "reset": true, 00:09:00.293 "nvme_admin": false, 00:09:00.293 "nvme_io": false, 00:09:00.293 "nvme_io_md": false, 00:09:00.293 "write_zeroes": true, 00:09:00.293 "zcopy": true, 00:09:00.293 "get_zone_info": false, 00:09:00.293 "zone_management": false, 00:09:00.293 "zone_append": false, 00:09:00.293 "compare": false, 00:09:00.293 "compare_and_write": false, 00:09:00.293 "abort": true, 00:09:00.293 "seek_hole": false, 00:09:00.293 "seek_data": false, 00:09:00.293 "copy": true, 00:09:00.293 "nvme_iov_md": false 00:09:00.293 }, 00:09:00.293 "memory_domains": [ 00:09:00.293 { 00:09:00.293 "dma_device_id": "system", 00:09:00.293 "dma_device_type": 1 00:09:00.293 }, 00:09:00.293 { 00:09:00.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.293 "dma_device_type": 2 00:09:00.293 } 00:09:00.293 ], 00:09:00.293 "driver_specific": { 00:09:00.293 "passthru": { 00:09:00.293 "name": "Passthru0", 00:09:00.293 "base_bdev_name": "Malloc0" 00:09:00.293 } 00:09:00.293 } 00:09:00.293 } 00:09:00.293 ]' 00:09:00.293 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:00.553 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:00.553 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:00.553 11:52:25 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.553 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.553 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.553 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:09:00.553 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.553 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.553 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.553 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:00.553 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.553 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.553 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.553 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:00.553 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:00.553 11:52:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:00.553 00:09:00.553 real 0m0.292s 00:09:00.553 user 0m0.180s 00:09:00.553 sys 0m0.045s 00:09:00.553 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.553 11:52:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:00.553 ************************************ 00:09:00.553 END TEST rpc_integrity 00:09:00.553 ************************************ 00:09:00.553 11:52:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:09:00.553 11:52:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.553 11:52:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.553 11:52:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.553 ************************************ 00:09:00.553 START TEST rpc_plugins 
00:09:00.553 ************************************ 00:09:00.553 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:09:00.553 11:52:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:09:00.553 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.553 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:00.553 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.553 11:52:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:09:00.553 11:52:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:09:00.553 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.553 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:00.553 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.553 11:52:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:09:00.553 { 00:09:00.553 "name": "Malloc1", 00:09:00.553 "aliases": [ 00:09:00.553 "d288989d-1b2e-4e21-a3bd-b48403d34176" 00:09:00.553 ], 00:09:00.553 "product_name": "Malloc disk", 00:09:00.553 "block_size": 4096, 00:09:00.553 "num_blocks": 256, 00:09:00.553 "uuid": "d288989d-1b2e-4e21-a3bd-b48403d34176", 00:09:00.553 "assigned_rate_limits": { 00:09:00.553 "rw_ios_per_sec": 0, 00:09:00.553 "rw_mbytes_per_sec": 0, 00:09:00.553 "r_mbytes_per_sec": 0, 00:09:00.553 "w_mbytes_per_sec": 0 00:09:00.553 }, 00:09:00.553 "claimed": false, 00:09:00.553 "zoned": false, 00:09:00.553 "supported_io_types": { 00:09:00.553 "read": true, 00:09:00.553 "write": true, 00:09:00.553 "unmap": true, 00:09:00.553 "flush": true, 00:09:00.553 "reset": true, 00:09:00.553 "nvme_admin": false, 00:09:00.553 "nvme_io": false, 00:09:00.553 "nvme_io_md": false, 00:09:00.553 "write_zeroes": true, 00:09:00.553 "zcopy": true, 00:09:00.553 "get_zone_info": false, 00:09:00.554 "zone_management": false, 00:09:00.554 
"zone_append": false, 00:09:00.554 "compare": false, 00:09:00.554 "compare_and_write": false, 00:09:00.554 "abort": true, 00:09:00.554 "seek_hole": false, 00:09:00.554 "seek_data": false, 00:09:00.554 "copy": true, 00:09:00.554 "nvme_iov_md": false 00:09:00.554 }, 00:09:00.554 "memory_domains": [ 00:09:00.554 { 00:09:00.554 "dma_device_id": "system", 00:09:00.554 "dma_device_type": 1 00:09:00.554 }, 00:09:00.554 { 00:09:00.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:00.554 "dma_device_type": 2 00:09:00.554 } 00:09:00.554 ], 00:09:00.554 "driver_specific": {} 00:09:00.554 } 00:09:00.554 ]' 00:09:00.554 11:52:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:09:00.814 11:52:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:09:00.814 11:52:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:09:00.814 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.814 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:00.814 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.814 11:52:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:09:00.814 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.814 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:00.814 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.814 11:52:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:09:00.814 11:52:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:09:00.814 11:52:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:09:00.814 00:09:00.814 real 0m0.159s 00:09:00.814 user 0m0.099s 00:09:00.814 sys 0m0.022s 00:09:00.814 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.814 11:52:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:09:00.814 ************************************ 
00:09:00.814 END TEST rpc_plugins 00:09:00.814 ************************************ 00:09:00.814 11:52:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:09:00.814 11:52:25 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:00.814 11:52:25 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.814 11:52:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:00.814 ************************************ 00:09:00.814 START TEST rpc_trace_cmd_test 00:09:00.814 ************************************ 00:09:00.814 11:52:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:09:00.814 11:52:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:09:00.814 11:52:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:09:00.814 11:52:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.814 11:52:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.814 11:52:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.814 11:52:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:09:00.814 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1120893", 00:09:00.814 "tpoint_group_mask": "0x8", 00:09:00.814 "iscsi_conn": { 00:09:00.814 "mask": "0x2", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "scsi": { 00:09:00.814 "mask": "0x4", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "bdev": { 00:09:00.814 "mask": "0x8", 00:09:00.814 "tpoint_mask": "0xffffffffffffffff" 00:09:00.814 }, 00:09:00.814 "nvmf_rdma": { 00:09:00.814 "mask": "0x10", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "nvmf_tcp": { 00:09:00.814 "mask": "0x20", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "ftl": { 00:09:00.814 "mask": "0x40", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "blobfs": { 00:09:00.814 "mask": "0x80", 00:09:00.814 
"tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "dsa": { 00:09:00.814 "mask": "0x200", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "thread": { 00:09:00.814 "mask": "0x400", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "nvme_pcie": { 00:09:00.814 "mask": "0x800", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "iaa": { 00:09:00.814 "mask": "0x1000", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "nvme_tcp": { 00:09:00.814 "mask": "0x2000", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "bdev_nvme": { 00:09:00.814 "mask": "0x4000", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "sock": { 00:09:00.814 "mask": "0x8000", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "blob": { 00:09:00.814 "mask": "0x10000", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "bdev_raid": { 00:09:00.814 "mask": "0x20000", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 }, 00:09:00.814 "scheduler": { 00:09:00.814 "mask": "0x40000", 00:09:00.814 "tpoint_mask": "0x0" 00:09:00.814 } 00:09:00.814 }' 00:09:00.814 11:52:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:09:00.814 11:52:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:09:00.814 11:52:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:09:01.074 11:52:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:09:01.075 11:52:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:09:01.075 11:52:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:09:01.075 11:52:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:09:01.075 11:52:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:09:01.075 11:52:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:09:01.075 11:52:26 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:09:01.075 00:09:01.075 real 0m0.235s 00:09:01.075 user 0m0.192s 00:09:01.075 sys 0m0.034s 00:09:01.075 11:52:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.075 11:52:26 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.075 ************************************ 00:09:01.075 END TEST rpc_trace_cmd_test 00:09:01.075 ************************************ 00:09:01.075 11:52:26 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:09:01.075 11:52:26 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:09:01.075 11:52:26 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:09:01.075 11:52:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.075 11:52:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.075 11:52:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.075 ************************************ 00:09:01.075 START TEST rpc_daemon_integrity 00:09:01.075 ************************************ 00:09:01.075 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:09:01.075 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:09:01.075 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.075 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:01.075 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.075 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:09:01.075 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:09:01.335 { 00:09:01.335 "name": "Malloc2", 00:09:01.335 "aliases": [ 00:09:01.335 "39e0a49f-3cc8-4de9-9fc3-9909a93952a9" 00:09:01.335 ], 00:09:01.335 "product_name": "Malloc disk", 00:09:01.335 "block_size": 512, 00:09:01.335 "num_blocks": 16384, 00:09:01.335 "uuid": "39e0a49f-3cc8-4de9-9fc3-9909a93952a9", 00:09:01.335 "assigned_rate_limits": { 00:09:01.335 "rw_ios_per_sec": 0, 00:09:01.335 "rw_mbytes_per_sec": 0, 00:09:01.335 "r_mbytes_per_sec": 0, 00:09:01.335 "w_mbytes_per_sec": 0 00:09:01.335 }, 00:09:01.335 "claimed": false, 00:09:01.335 "zoned": false, 00:09:01.335 "supported_io_types": { 00:09:01.335 "read": true, 00:09:01.335 "write": true, 00:09:01.335 "unmap": true, 00:09:01.335 "flush": true, 00:09:01.335 "reset": true, 00:09:01.335 "nvme_admin": false, 00:09:01.335 "nvme_io": false, 00:09:01.335 "nvme_io_md": false, 00:09:01.335 "write_zeroes": true, 00:09:01.335 "zcopy": true, 00:09:01.335 "get_zone_info": false, 00:09:01.335 "zone_management": false, 00:09:01.335 "zone_append": false, 00:09:01.335 "compare": false, 00:09:01.335 "compare_and_write": false, 00:09:01.335 "abort": true, 00:09:01.335 "seek_hole": false, 00:09:01.335 "seek_data": false, 00:09:01.335 "copy": true, 00:09:01.335 "nvme_iov_md": false 00:09:01.335 }, 00:09:01.335 "memory_domains": [ 00:09:01.335 { 
00:09:01.335 "dma_device_id": "system", 00:09:01.335 "dma_device_type": 1 00:09:01.335 }, 00:09:01.335 { 00:09:01.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.335 "dma_device_type": 2 00:09:01.335 } 00:09:01.335 ], 00:09:01.335 "driver_specific": {} 00:09:01.335 } 00:09:01.335 ]' 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:01.335 [2024-12-05 11:52:26.242716] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:09:01.335 [2024-12-05 11:52:26.242759] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.335 [2024-12-05 11:52:26.242777] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7d3040 00:09:01.335 [2024-12-05 11:52:26.242785] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.335 [2024-12-05 11:52:26.244268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.335 [2024-12-05 11:52:26.244305] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:09:01.335 Passthru0 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:01.335 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:09:01.335 { 00:09:01.335 "name": "Malloc2", 00:09:01.335 "aliases": [ 00:09:01.335 "39e0a49f-3cc8-4de9-9fc3-9909a93952a9" 00:09:01.335 ], 00:09:01.335 "product_name": "Malloc disk", 00:09:01.335 "block_size": 512, 00:09:01.335 "num_blocks": 16384, 00:09:01.335 "uuid": "39e0a49f-3cc8-4de9-9fc3-9909a93952a9", 00:09:01.335 "assigned_rate_limits": { 00:09:01.335 "rw_ios_per_sec": 0, 00:09:01.335 "rw_mbytes_per_sec": 0, 00:09:01.335 "r_mbytes_per_sec": 0, 00:09:01.335 "w_mbytes_per_sec": 0 00:09:01.335 }, 00:09:01.335 "claimed": true, 00:09:01.335 "claim_type": "exclusive_write", 00:09:01.335 "zoned": false, 00:09:01.335 "supported_io_types": { 00:09:01.335 "read": true, 00:09:01.335 "write": true, 00:09:01.335 "unmap": true, 00:09:01.335 "flush": true, 00:09:01.335 "reset": true, 00:09:01.335 "nvme_admin": false, 00:09:01.335 "nvme_io": false, 00:09:01.335 "nvme_io_md": false, 00:09:01.335 "write_zeroes": true, 00:09:01.335 "zcopy": true, 00:09:01.335 "get_zone_info": false, 00:09:01.335 "zone_management": false, 00:09:01.335 "zone_append": false, 00:09:01.335 "compare": false, 00:09:01.335 "compare_and_write": false, 00:09:01.335 "abort": true, 00:09:01.335 "seek_hole": false, 00:09:01.335 "seek_data": false, 00:09:01.335 "copy": true, 00:09:01.336 "nvme_iov_md": false 00:09:01.336 }, 00:09:01.336 "memory_domains": [ 00:09:01.336 { 00:09:01.336 "dma_device_id": "system", 00:09:01.336 "dma_device_type": 1 00:09:01.336 }, 00:09:01.336 { 00:09:01.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.336 "dma_device_type": 2 00:09:01.336 } 00:09:01.336 ], 00:09:01.336 "driver_specific": {} 00:09:01.336 }, 00:09:01.336 { 00:09:01.336 "name": "Passthru0", 00:09:01.336 "aliases": [ 00:09:01.336 "1922f3d4-3ab4-536f-b61f-5d8044eb0986" 00:09:01.336 ], 00:09:01.336 "product_name": "passthru", 00:09:01.336 "block_size": 512, 00:09:01.336 "num_blocks": 16384, 00:09:01.336 "uuid": 
"1922f3d4-3ab4-536f-b61f-5d8044eb0986", 00:09:01.336 "assigned_rate_limits": { 00:09:01.336 "rw_ios_per_sec": 0, 00:09:01.336 "rw_mbytes_per_sec": 0, 00:09:01.336 "r_mbytes_per_sec": 0, 00:09:01.336 "w_mbytes_per_sec": 0 00:09:01.336 }, 00:09:01.336 "claimed": false, 00:09:01.336 "zoned": false, 00:09:01.336 "supported_io_types": { 00:09:01.336 "read": true, 00:09:01.336 "write": true, 00:09:01.336 "unmap": true, 00:09:01.336 "flush": true, 00:09:01.336 "reset": true, 00:09:01.336 "nvme_admin": false, 00:09:01.336 "nvme_io": false, 00:09:01.336 "nvme_io_md": false, 00:09:01.336 "write_zeroes": true, 00:09:01.336 "zcopy": true, 00:09:01.336 "get_zone_info": false, 00:09:01.336 "zone_management": false, 00:09:01.336 "zone_append": false, 00:09:01.336 "compare": false, 00:09:01.336 "compare_and_write": false, 00:09:01.336 "abort": true, 00:09:01.336 "seek_hole": false, 00:09:01.336 "seek_data": false, 00:09:01.336 "copy": true, 00:09:01.336 "nvme_iov_md": false 00:09:01.336 }, 00:09:01.336 "memory_domains": [ 00:09:01.336 { 00:09:01.336 "dma_device_id": "system", 00:09:01.336 "dma_device_type": 1 00:09:01.336 }, 00:09:01.336 { 00:09:01.336 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:01.336 "dma_device_type": 2 00:09:01.336 } 00:09:01.336 ], 00:09:01.336 "driver_specific": { 00:09:01.336 "passthru": { 00:09:01.336 "name": "Passthru0", 00:09:01.336 "base_bdev_name": "Malloc2" 00:09:01.336 } 00:09:01.336 } 00:09:01.336 } 00:09:01.336 ]' 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:09:01.336 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:09:01.596 11:52:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:09:01.596 00:09:01.596 real 0m0.308s 00:09:01.596 user 0m0.193s 00:09:01.596 sys 0m0.047s 00:09:01.596 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.596 11:52:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:09:01.596 ************************************ 00:09:01.596 END TEST rpc_daemon_integrity 00:09:01.596 ************************************ 00:09:01.596 11:52:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:09:01.596 11:52:26 rpc -- rpc/rpc.sh@84 -- # killprocess 1120893 00:09:01.596 11:52:26 rpc -- common/autotest_common.sh@954 -- # '[' -z 1120893 ']' 00:09:01.596 11:52:26 rpc -- common/autotest_common.sh@958 -- # kill -0 1120893 00:09:01.596 11:52:26 rpc -- common/autotest_common.sh@959 -- # uname 00:09:01.596 11:52:26 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:01.596 11:52:26 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1120893 00:09:01.596 11:52:26 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:01.596 11:52:26 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:01.596 11:52:26 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1120893' 00:09:01.596 killing process with pid 1120893 00:09:01.596 11:52:26 rpc -- common/autotest_common.sh@973 -- # kill 1120893 00:09:01.596 11:52:26 rpc -- common/autotest_common.sh@978 -- # wait 1120893 00:09:01.857 00:09:01.857 real 0m2.721s 00:09:01.857 user 0m3.451s 00:09:01.857 sys 0m0.855s 00:09:01.857 11:52:26 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:01.857 11:52:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:09:01.857 ************************************ 00:09:01.857 END TEST rpc 00:09:01.857 ************************************ 00:09:01.857 11:52:26 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:09:01.857 11:52:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:01.857 11:52:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:01.857 11:52:26 -- common/autotest_common.sh@10 -- # set +x 00:09:01.857 ************************************ 00:09:01.857 START TEST skip_rpc 00:09:01.857 ************************************ 00:09:01.857 11:52:26 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:09:02.118 * Looking for test storage... 
00:09:02.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:09:02.118 11:52:26 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:02.118 11:52:26 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:02.118 11:52:26 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:02.118 11:52:27 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@345 -- # : 1 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:02.118 11:52:27 skip_rpc -- scripts/common.sh@368 -- # return 0 00:09:02.118 11:52:27 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:02.118 11:52:27 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:02.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.118 --rc genhtml_branch_coverage=1 00:09:02.118 --rc genhtml_function_coverage=1 00:09:02.118 --rc genhtml_legend=1 00:09:02.118 --rc geninfo_all_blocks=1 00:09:02.118 --rc geninfo_unexecuted_blocks=1 00:09:02.118 00:09:02.118 ' 00:09:02.118 11:52:27 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:02.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.118 --rc genhtml_branch_coverage=1 00:09:02.118 --rc genhtml_function_coverage=1 00:09:02.118 --rc genhtml_legend=1 00:09:02.118 --rc geninfo_all_blocks=1 00:09:02.118 --rc geninfo_unexecuted_blocks=1 00:09:02.118 00:09:02.118 ' 00:09:02.118 11:52:27 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:09:02.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.118 --rc genhtml_branch_coverage=1 00:09:02.118 --rc genhtml_function_coverage=1 00:09:02.118 --rc genhtml_legend=1 00:09:02.118 --rc geninfo_all_blocks=1 00:09:02.118 --rc geninfo_unexecuted_blocks=1 00:09:02.118 00:09:02.118 ' 00:09:02.118 11:52:27 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:02.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:02.118 --rc genhtml_branch_coverage=1 00:09:02.118 --rc genhtml_function_coverage=1 00:09:02.118 --rc genhtml_legend=1 00:09:02.118 --rc geninfo_all_blocks=1 00:09:02.118 --rc geninfo_unexecuted_blocks=1 00:09:02.118 00:09:02.118 ' 00:09:02.118 11:52:27 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:02.118 11:52:27 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:02.118 11:52:27 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:09:02.118 11:52:27 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:02.118 11:52:27 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:02.118 11:52:27 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:02.118 ************************************ 00:09:02.118 START TEST skip_rpc 00:09:02.118 ************************************ 00:09:02.118 11:52:27 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:09:02.118 11:52:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1121472 00:09:02.118 11:52:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:02.118 11:52:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:09:02.118 11:52:27 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 
00:09:02.118 [2024-12-05 11:52:27.139729] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:09:02.118 [2024-12-05 11:52:27.139795] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121472 ] 00:09:02.380 [2024-12-05 11:52:27.233066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.380 [2024-12-05 11:52:27.286695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:07.680 11:52:32 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1121472 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1121472 ']' 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1121472 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1121472 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1121472' 00:09:07.680 killing process with pid 1121472 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1121472 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1121472 00:09:07.680 00:09:07.680 real 0m5.265s 00:09:07.680 user 0m4.999s 00:09:07.680 sys 0m0.312s 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:07.680 11:52:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.680 ************************************ 00:09:07.680 END TEST skip_rpc 00:09:07.680 ************************************ 00:09:07.680 11:52:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:09:07.680 11:52:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:07.680 11:52:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:07.680 11:52:32 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.680 ************************************ 00:09:07.680 START TEST skip_rpc_with_json 00:09:07.680 ************************************ 00:09:07.680 11:52:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:09:07.680 11:52:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:09:07.681 11:52:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1122630 00:09:07.681 11:52:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:07.681 11:52:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1122630 00:09:07.681 11:52:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:07.681 11:52:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1122630 ']' 00:09:07.681 11:52:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.681 11:52:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:07.681 11:52:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.681 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.681 11:52:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:07.681 11:52:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:07.681 [2024-12-05 11:52:32.480479] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:09:07.681 [2024-12-05 11:52:32.480536] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1122630 ] 00:09:07.681 [2024-12-05 11:52:32.570585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.681 [2024-12-05 11:52:32.610954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.252 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:08.252 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:09:08.252 11:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:09:08.252 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.252 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:08.252 [2024-12-05 11:52:33.292299] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:09:08.252 request: 00:09:08.252 { 00:09:08.252 "trtype": "tcp", 00:09:08.252 "method": "nvmf_get_transports", 00:09:08.252 "req_id": 1 00:09:08.252 } 00:09:08.252 Got JSON-RPC error response 00:09:08.252 response: 00:09:08.252 { 00:09:08.252 "code": -19, 00:09:08.252 "message": "No such device" 00:09:08.252 } 00:09:08.252 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:08.252 11:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:09:08.252 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.252 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:08.512 [2024-12-05 11:52:33.304395] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:08.512 11:52:33 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.512 11:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:09:08.512 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.512 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:08.512 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.512 11:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:08.512 { 00:09:08.512 "subsystems": [ 00:09:08.512 { 00:09:08.512 "subsystem": "fsdev", 00:09:08.512 "config": [ 00:09:08.512 { 00:09:08.512 "method": "fsdev_set_opts", 00:09:08.512 "params": { 00:09:08.512 "fsdev_io_pool_size": 65535, 00:09:08.512 "fsdev_io_cache_size": 256 00:09:08.512 } 00:09:08.512 } 00:09:08.512 ] 00:09:08.512 }, 00:09:08.512 { 00:09:08.512 "subsystem": "vfio_user_target", 00:09:08.512 "config": null 00:09:08.512 }, 00:09:08.512 { 00:09:08.512 "subsystem": "keyring", 00:09:08.512 "config": [] 00:09:08.512 }, 00:09:08.512 { 00:09:08.512 "subsystem": "iobuf", 00:09:08.512 "config": [ 00:09:08.512 { 00:09:08.512 "method": "iobuf_set_options", 00:09:08.512 "params": { 00:09:08.512 "small_pool_count": 8192, 00:09:08.512 "large_pool_count": 1024, 00:09:08.512 "small_bufsize": 8192, 00:09:08.512 "large_bufsize": 135168, 00:09:08.512 "enable_numa": false 00:09:08.512 } 00:09:08.512 } 00:09:08.512 ] 00:09:08.512 }, 00:09:08.512 { 00:09:08.512 "subsystem": "sock", 00:09:08.512 "config": [ 00:09:08.512 { 00:09:08.512 "method": "sock_set_default_impl", 00:09:08.512 "params": { 00:09:08.512 "impl_name": "posix" 00:09:08.512 } 00:09:08.512 }, 00:09:08.512 { 00:09:08.512 "method": "sock_impl_set_options", 00:09:08.512 "params": { 00:09:08.512 "impl_name": "ssl", 00:09:08.512 "recv_buf_size": 4096, 00:09:08.512 "send_buf_size": 4096, 
00:09:08.512 "enable_recv_pipe": true, 00:09:08.512 "enable_quickack": false, 00:09:08.512 "enable_placement_id": 0, 00:09:08.512 "enable_zerocopy_send_server": true, 00:09:08.512 "enable_zerocopy_send_client": false, 00:09:08.512 "zerocopy_threshold": 0, 00:09:08.512 "tls_version": 0, 00:09:08.512 "enable_ktls": false 00:09:08.512 } 00:09:08.512 }, 00:09:08.512 { 00:09:08.512 "method": "sock_impl_set_options", 00:09:08.512 "params": { 00:09:08.512 "impl_name": "posix", 00:09:08.512 "recv_buf_size": 2097152, 00:09:08.512 "send_buf_size": 2097152, 00:09:08.512 "enable_recv_pipe": true, 00:09:08.512 "enable_quickack": false, 00:09:08.512 "enable_placement_id": 0, 00:09:08.512 "enable_zerocopy_send_server": true, 00:09:08.512 "enable_zerocopy_send_client": false, 00:09:08.512 "zerocopy_threshold": 0, 00:09:08.512 "tls_version": 0, 00:09:08.512 "enable_ktls": false 00:09:08.512 } 00:09:08.512 } 00:09:08.512 ] 00:09:08.512 }, 00:09:08.512 { 00:09:08.512 "subsystem": "vmd", 00:09:08.512 "config": [] 00:09:08.512 }, 00:09:08.512 { 00:09:08.512 "subsystem": "accel", 00:09:08.512 "config": [ 00:09:08.512 { 00:09:08.512 "method": "accel_set_options", 00:09:08.512 "params": { 00:09:08.512 "small_cache_size": 128, 00:09:08.512 "large_cache_size": 16, 00:09:08.512 "task_count": 2048, 00:09:08.512 "sequence_count": 2048, 00:09:08.512 "buf_count": 2048 00:09:08.512 } 00:09:08.512 } 00:09:08.512 ] 00:09:08.512 }, 00:09:08.512 { 00:09:08.512 "subsystem": "bdev", 00:09:08.512 "config": [ 00:09:08.512 { 00:09:08.512 "method": "bdev_set_options", 00:09:08.512 "params": { 00:09:08.512 "bdev_io_pool_size": 65535, 00:09:08.512 "bdev_io_cache_size": 256, 00:09:08.512 "bdev_auto_examine": true, 00:09:08.512 "iobuf_small_cache_size": 128, 00:09:08.512 "iobuf_large_cache_size": 16 00:09:08.512 } 00:09:08.512 }, 00:09:08.512 { 00:09:08.512 "method": "bdev_raid_set_options", 00:09:08.512 "params": { 00:09:08.512 "process_window_size_kb": 1024, 00:09:08.512 "process_max_bandwidth_mb_sec": 0 
00:09:08.512 } 00:09:08.512 }, 00:09:08.512 { 00:09:08.512 "method": "bdev_iscsi_set_options", 00:09:08.512 "params": { 00:09:08.512 "timeout_sec": 30 00:09:08.512 } 00:09:08.512 }, 00:09:08.512 { 00:09:08.512 "method": "bdev_nvme_set_options", 00:09:08.512 "params": { 00:09:08.512 "action_on_timeout": "none", 00:09:08.512 "timeout_us": 0, 00:09:08.512 "timeout_admin_us": 0, 00:09:08.512 "keep_alive_timeout_ms": 10000, 00:09:08.512 "arbitration_burst": 0, 00:09:08.512 "low_priority_weight": 0, 00:09:08.512 "medium_priority_weight": 0, 00:09:08.512 "high_priority_weight": 0, 00:09:08.512 "nvme_adminq_poll_period_us": 10000, 00:09:08.512 "nvme_ioq_poll_period_us": 0, 00:09:08.512 "io_queue_requests": 0, 00:09:08.512 "delay_cmd_submit": true, 00:09:08.512 "transport_retry_count": 4, 00:09:08.512 "bdev_retry_count": 3, 00:09:08.512 "transport_ack_timeout": 0, 00:09:08.512 "ctrlr_loss_timeout_sec": 0, 00:09:08.512 "reconnect_delay_sec": 0, 00:09:08.512 "fast_io_fail_timeout_sec": 0, 00:09:08.512 "disable_auto_failback": false, 00:09:08.512 "generate_uuids": false, 00:09:08.512 "transport_tos": 0, 00:09:08.512 "nvme_error_stat": false, 00:09:08.512 "rdma_srq_size": 0, 00:09:08.512 "io_path_stat": false, 00:09:08.512 "allow_accel_sequence": false, 00:09:08.512 "rdma_max_cq_size": 0, 00:09:08.512 "rdma_cm_event_timeout_ms": 0, 00:09:08.512 "dhchap_digests": [ 00:09:08.512 "sha256", 00:09:08.512 "sha384", 00:09:08.512 "sha512" 00:09:08.513 ], 00:09:08.513 "dhchap_dhgroups": [ 00:09:08.513 "null", 00:09:08.513 "ffdhe2048", 00:09:08.513 "ffdhe3072", 00:09:08.513 "ffdhe4096", 00:09:08.513 "ffdhe6144", 00:09:08.513 "ffdhe8192" 00:09:08.513 ] 00:09:08.513 } 00:09:08.513 }, 00:09:08.513 { 00:09:08.513 "method": "bdev_nvme_set_hotplug", 00:09:08.513 "params": { 00:09:08.513 "period_us": 100000, 00:09:08.513 "enable": false 00:09:08.513 } 00:09:08.513 }, 00:09:08.513 { 00:09:08.513 "method": "bdev_wait_for_examine" 00:09:08.513 } 00:09:08.513 ] 00:09:08.513 }, 00:09:08.513 { 
00:09:08.513 "subsystem": "scsi", 00:09:08.513 "config": null 00:09:08.513 }, 00:09:08.513 { 00:09:08.513 "subsystem": "scheduler", 00:09:08.513 "config": [ 00:09:08.513 { 00:09:08.513 "method": "framework_set_scheduler", 00:09:08.513 "params": { 00:09:08.513 "name": "static" 00:09:08.513 } 00:09:08.513 } 00:09:08.513 ] 00:09:08.513 }, 00:09:08.513 { 00:09:08.513 "subsystem": "vhost_scsi", 00:09:08.513 "config": [] 00:09:08.513 }, 00:09:08.513 { 00:09:08.513 "subsystem": "vhost_blk", 00:09:08.513 "config": [] 00:09:08.513 }, 00:09:08.513 { 00:09:08.513 "subsystem": "ublk", 00:09:08.513 "config": [] 00:09:08.513 }, 00:09:08.513 { 00:09:08.513 "subsystem": "nbd", 00:09:08.513 "config": [] 00:09:08.513 }, 00:09:08.513 { 00:09:08.513 "subsystem": "nvmf", 00:09:08.513 "config": [ 00:09:08.513 { 00:09:08.513 "method": "nvmf_set_config", 00:09:08.513 "params": { 00:09:08.513 "discovery_filter": "match_any", 00:09:08.513 "admin_cmd_passthru": { 00:09:08.513 "identify_ctrlr": false 00:09:08.513 }, 00:09:08.513 "dhchap_digests": [ 00:09:08.513 "sha256", 00:09:08.513 "sha384", 00:09:08.513 "sha512" 00:09:08.513 ], 00:09:08.513 "dhchap_dhgroups": [ 00:09:08.513 "null", 00:09:08.513 "ffdhe2048", 00:09:08.513 "ffdhe3072", 00:09:08.513 "ffdhe4096", 00:09:08.513 "ffdhe6144", 00:09:08.513 "ffdhe8192" 00:09:08.513 ] 00:09:08.513 } 00:09:08.513 }, 00:09:08.513 { 00:09:08.513 "method": "nvmf_set_max_subsystems", 00:09:08.513 "params": { 00:09:08.513 "max_subsystems": 1024 00:09:08.513 } 00:09:08.513 }, 00:09:08.513 { 00:09:08.513 "method": "nvmf_set_crdt", 00:09:08.513 "params": { 00:09:08.513 "crdt1": 0, 00:09:08.513 "crdt2": 0, 00:09:08.513 "crdt3": 0 00:09:08.513 } 00:09:08.513 }, 00:09:08.513 { 00:09:08.513 "method": "nvmf_create_transport", 00:09:08.513 "params": { 00:09:08.513 "trtype": "TCP", 00:09:08.513 "max_queue_depth": 128, 00:09:08.513 "max_io_qpairs_per_ctrlr": 127, 00:09:08.513 "in_capsule_data_size": 4096, 00:09:08.513 "max_io_size": 131072, 00:09:08.513 
"io_unit_size": 131072, 00:09:08.513 "max_aq_depth": 128, 00:09:08.513 "num_shared_buffers": 511, 00:09:08.513 "buf_cache_size": 4294967295, 00:09:08.513 "dif_insert_or_strip": false, 00:09:08.513 "zcopy": false, 00:09:08.513 "c2h_success": true, 00:09:08.513 "sock_priority": 0, 00:09:08.513 "abort_timeout_sec": 1, 00:09:08.513 "ack_timeout": 0, 00:09:08.513 "data_wr_pool_size": 0 00:09:08.513 } 00:09:08.513 } 00:09:08.513 ] 00:09:08.513 }, 00:09:08.513 { 00:09:08.513 "subsystem": "iscsi", 00:09:08.513 "config": [ 00:09:08.513 { 00:09:08.513 "method": "iscsi_set_options", 00:09:08.513 "params": { 00:09:08.513 "node_base": "iqn.2016-06.io.spdk", 00:09:08.513 "max_sessions": 128, 00:09:08.513 "max_connections_per_session": 2, 00:09:08.513 "max_queue_depth": 64, 00:09:08.513 "default_time2wait": 2, 00:09:08.513 "default_time2retain": 20, 00:09:08.513 "first_burst_length": 8192, 00:09:08.513 "immediate_data": true, 00:09:08.513 "allow_duplicated_isid": false, 00:09:08.513 "error_recovery_level": 0, 00:09:08.513 "nop_timeout": 60, 00:09:08.513 "nop_in_interval": 30, 00:09:08.513 "disable_chap": false, 00:09:08.513 "require_chap": false, 00:09:08.513 "mutual_chap": false, 00:09:08.513 "chap_group": 0, 00:09:08.513 "max_large_datain_per_connection": 64, 00:09:08.513 "max_r2t_per_connection": 4, 00:09:08.513 "pdu_pool_size": 36864, 00:09:08.513 "immediate_data_pool_size": 16384, 00:09:08.513 "data_out_pool_size": 2048 00:09:08.513 } 00:09:08.513 } 00:09:08.513 ] 00:09:08.513 } 00:09:08.513 ] 00:09:08.513 } 00:09:08.513 11:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:08.513 11:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1122630 00:09:08.513 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1122630 ']' 00:09:08.513 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1122630 00:09:08.513 11:52:33 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:09:08.513 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.513 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1122630 00:09:08.513 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.513 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.513 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1122630' 00:09:08.513 killing process with pid 1122630 00:09:08.513 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1122630 00:09:08.513 11:52:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1122630 00:09:08.773 11:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1122841 00:09:08.773 11:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:09:08.773 11:52:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1122841 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1122841 ']' 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1122841 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1122841 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1122841' 00:09:14.062 killing process with pid 1122841 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1122841 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1122841 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:09:14.062 00:09:14.062 real 0m6.569s 00:09:14.062 user 0m6.465s 00:09:14.062 sys 0m0.575s 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.062 11:52:38 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:09:14.062 ************************************ 00:09:14.062 END TEST skip_rpc_with_json 00:09:14.062 ************************************ 00:09:14.062 11:52:39 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:09:14.062 11:52:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.062 11:52:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.062 11:52:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.062 ************************************ 00:09:14.062 START TEST skip_rpc_with_delay 00:09:14.062 ************************************ 00:09:14.062 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:09:14.062 11:52:39 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:14.062 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:09:14.062 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:14.062 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:14.062 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.062 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:14.062 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.062 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:14.062 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.062 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:14.062 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:09:14.062 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:09:14.325 [2024-12-05 11:52:39.135029] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:09:14.325 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:09:14.325 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:14.325 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:14.325 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:14.325 00:09:14.325 real 0m0.081s 00:09:14.325 user 0m0.045s 00:09:14.325 sys 0m0.035s 00:09:14.325 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:14.325 11:52:39 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:09:14.325 ************************************ 00:09:14.325 END TEST skip_rpc_with_delay 00:09:14.325 ************************************ 00:09:14.325 11:52:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:09:14.325 11:52:39 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:09:14.325 11:52:39 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:09:14.325 11:52:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:14.325 11:52:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.325 11:52:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:14.325 ************************************ 00:09:14.325 START TEST exit_on_failed_rpc_init 00:09:14.325 ************************************ 00:09:14.325 11:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:09:14.325 11:52:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1124095 00:09:14.325 11:52:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1124095 00:09:14.325 11:52:39 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:09:14.325 11:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1124095 ']' 00:09:14.325 11:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:14.325 11:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:14.325 11:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:14.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:14.325 11:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:14.325 11:52:39 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:14.325 [2024-12-05 11:52:39.297917] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:09:14.325 [2024-12-05 11:52:39.297978] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124095 ] 00:09:14.585 [2024-12-05 11:52:39.386592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.585 [2024-12-05 11:52:39.429317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.156 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:15.156 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:09:15.156 11:52:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:09:15.156 11:52:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:15.156 
11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:09:15.156 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:15.156 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:15.156 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.156 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:15.156 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.156 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:15.156 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:15.156 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:15.156 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:09:15.156 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:09:15.156 [2024-12-05 11:52:40.177907] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:09:15.156 [2024-12-05 11:52:40.177960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124237 ] 00:09:15.415 [2024-12-05 11:52:40.265379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.415 [2024-12-05 11:52:40.301256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.415 [2024-12-05 11:52:40.301310] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:09:15.415 [2024-12-05 11:52:40.301319] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:09:15.415 [2024-12-05 11:52:40.301326] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1124095 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1124095 ']' 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1124095 00:09:15.415 11:52:40 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1124095 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1124095' 00:09:15.415 killing process with pid 1124095 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1124095 00:09:15.415 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1124095 00:09:15.675 00:09:15.675 real 0m1.358s 00:09:15.675 user 0m1.582s 00:09:15.675 sys 0m0.417s 00:09:15.675 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.675 11:52:40 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:09:15.675 ************************************ 00:09:15.675 END TEST exit_on_failed_rpc_init 00:09:15.675 ************************************ 00:09:15.675 11:52:40 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:09:15.675 00:09:15.675 real 0m13.798s 00:09:15.675 user 0m13.323s 00:09:15.675 sys 0m1.661s 00:09:15.675 11:52:40 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.675 11:52:40 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.675 ************************************ 00:09:15.675 END TEST skip_rpc 00:09:15.675 ************************************ 00:09:15.675 11:52:40 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:15.675 11:52:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.675 11:52:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.675 11:52:40 -- common/autotest_common.sh@10 -- # set +x 00:09:15.675 ************************************ 00:09:15.675 START TEST rpc_client 00:09:15.675 ************************************ 00:09:15.675 11:52:40 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:09:15.935 * Looking for test storage... 00:09:15.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:09:15.936 11:52:40 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:15.936 11:52:40 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:09:15.936 11:52:40 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:15.936 11:52:40 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@345 -- # : 1 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@353 -- # local d=1 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@355 -- # echo 1 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@353 -- # local d=2 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@355 -- # echo 2 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.936 11:52:40 rpc_client -- scripts/common.sh@368 -- # return 0 00:09:15.936 11:52:40 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.936 11:52:40 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:15.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.936 --rc genhtml_branch_coverage=1 00:09:15.936 --rc genhtml_function_coverage=1 00:09:15.936 --rc genhtml_legend=1 00:09:15.936 --rc geninfo_all_blocks=1 00:09:15.936 --rc geninfo_unexecuted_blocks=1 00:09:15.936 00:09:15.936 ' 00:09:15.936 11:52:40 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:15.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.936 --rc genhtml_branch_coverage=1 
00:09:15.936 --rc genhtml_function_coverage=1 00:09:15.936 --rc genhtml_legend=1 00:09:15.936 --rc geninfo_all_blocks=1 00:09:15.936 --rc geninfo_unexecuted_blocks=1 00:09:15.936 00:09:15.936 ' 00:09:15.936 11:52:40 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:15.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.936 --rc genhtml_branch_coverage=1 00:09:15.936 --rc genhtml_function_coverage=1 00:09:15.936 --rc genhtml_legend=1 00:09:15.936 --rc geninfo_all_blocks=1 00:09:15.936 --rc geninfo_unexecuted_blocks=1 00:09:15.936 00:09:15.936 ' 00:09:15.936 11:52:40 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:15.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.936 --rc genhtml_branch_coverage=1 00:09:15.936 --rc genhtml_function_coverage=1 00:09:15.936 --rc genhtml_legend=1 00:09:15.936 --rc geninfo_all_blocks=1 00:09:15.936 --rc geninfo_unexecuted_blocks=1 00:09:15.936 00:09:15.936 ' 00:09:15.936 11:52:40 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:09:15.936 OK 00:09:15.936 11:52:40 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:09:15.936 00:09:15.936 real 0m0.237s 00:09:15.936 user 0m0.142s 00:09:15.936 sys 0m0.109s 00:09:15.936 11:52:40 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.936 11:52:40 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:09:15.936 ************************************ 00:09:15.936 END TEST rpc_client 00:09:15.936 ************************************ 00:09:16.197 11:52:40 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:09:16.197 11:52:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:16.197 11:52:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.197 11:52:40 -- common/autotest_common.sh@10 
-- # set +x 00:09:16.197 ************************************ 00:09:16.197 START TEST json_config 00:09:16.197 ************************************ 00:09:16.197 11:52:41 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:09:16.197 11:52:41 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:16.197 11:52:41 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:09:16.197 11:52:41 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:16.197 11:52:41 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:16.197 11:52:41 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.197 11:52:41 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.197 11:52:41 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.197 11:52:41 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.197 11:52:41 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.197 11:52:41 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.197 11:52:41 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.197 11:52:41 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.197 11:52:41 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.197 11:52:41 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.197 11:52:41 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.197 11:52:41 json_config -- scripts/common.sh@344 -- # case "$op" in 00:09:16.197 11:52:41 json_config -- scripts/common.sh@345 -- # : 1 00:09:16.197 11:52:41 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.197 11:52:41 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:16.197 11:52:41 json_config -- scripts/common.sh@365 -- # decimal 1 00:09:16.197 11:52:41 json_config -- scripts/common.sh@353 -- # local d=1 00:09:16.197 11:52:41 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.197 11:52:41 json_config -- scripts/common.sh@355 -- # echo 1 00:09:16.197 11:52:41 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.197 11:52:41 json_config -- scripts/common.sh@366 -- # decimal 2 00:09:16.197 11:52:41 json_config -- scripts/common.sh@353 -- # local d=2 00:09:16.197 11:52:41 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.197 11:52:41 json_config -- scripts/common.sh@355 -- # echo 2 00:09:16.197 11:52:41 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.197 11:52:41 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.197 11:52:41 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.197 11:52:41 json_config -- scripts/common.sh@368 -- # return 0 00:09:16.197 11:52:41 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.197 11:52:41 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:16.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.197 --rc genhtml_branch_coverage=1 00:09:16.197 --rc genhtml_function_coverage=1 00:09:16.197 --rc genhtml_legend=1 00:09:16.197 --rc geninfo_all_blocks=1 00:09:16.197 --rc geninfo_unexecuted_blocks=1 00:09:16.197 00:09:16.197 ' 00:09:16.197 11:52:41 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:16.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.197 --rc genhtml_branch_coverage=1 00:09:16.197 --rc genhtml_function_coverage=1 00:09:16.197 --rc genhtml_legend=1 00:09:16.197 --rc geninfo_all_blocks=1 00:09:16.197 --rc geninfo_unexecuted_blocks=1 00:09:16.197 00:09:16.197 ' 00:09:16.197 11:52:41 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:16.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.197 --rc genhtml_branch_coverage=1 00:09:16.197 --rc genhtml_function_coverage=1 00:09:16.197 --rc genhtml_legend=1 00:09:16.197 --rc geninfo_all_blocks=1 00:09:16.197 --rc geninfo_unexecuted_blocks=1 00:09:16.197 00:09:16.197 ' 00:09:16.197 11:52:41 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:16.197 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.197 --rc genhtml_branch_coverage=1 00:09:16.197 --rc genhtml_function_coverage=1 00:09:16.197 --rc genhtml_legend=1 00:09:16.197 --rc geninfo_all_blocks=1 00:09:16.197 --rc geninfo_unexecuted_blocks=1 00:09:16.197 00:09:16.198 ' 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@7 -- # uname -s 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.198 11:52:41 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.198 11:52:41 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.198 11:52:41 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.198 11:52:41 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.198 11:52:41 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.198 11:52:41 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.198 11:52:41 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.198 11:52:41 json_config -- paths/export.sh@5 -- # export PATH 00:09:16.198 11:52:41 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:16.198 11:52:41 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:16.198 11:52:41 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:16.198 11:52:41 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@50 -- # : 0 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:16.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: 
line 31: [: : integer expression expected 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:16.198 11:52:41 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:09:16.198 11:52:41 json_config -- 
json_config/json_config.sh@40 -- # last_event_id=0 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:09:16.198 INFO: JSON configuration test init 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:09:16.198 11:52:41 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:09:16.198 11:52:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.198 11:52:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:16.459 11:52:41 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:09:16.459 11:52:41 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:16.459 11:52:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:16.459 11:52:41 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:09:16.459 11:52:41 json_config -- json_config/common.sh@9 -- # local app=target 00:09:16.459 11:52:41 json_config -- json_config/common.sh@10 -- # shift 00:09:16.459 11:52:41 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:09:16.459 11:52:41 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:09:16.459 11:52:41 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:09:16.459 11:52:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:16.459 11:52:41 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:09:16.459 11:52:41 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1124693 00:09:16.459 11:52:41 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:09:16.459 Waiting for target to run... 
00:09:16.459 11:52:41 json_config -- json_config/common.sh@25 -- # waitforlisten 1124693 /var/tmp/spdk_tgt.sock 00:09:16.459 11:52:41 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:09:16.459 11:52:41 json_config -- common/autotest_common.sh@835 -- # '[' -z 1124693 ']' 00:09:16.459 11:52:41 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:09:16.459 11:52:41 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:16.459 11:52:41 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:09:16.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:09:16.459 11:52:41 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:16.459 11:52:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:16.459 [2024-12-05 11:52:41.313374] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
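The startup just traced launches `spdk_tgt` and then blocks in `waitforlisten` until the target answers on `/var/tmp/spdk_tgt.sock` ("Waiting for process to start up and listen on UNIX domain socket..."). A minimal sketch of that polling pattern — a hypothetical simplification, not SPDK's actual `autotest_common.sh` helper:

```shell
#!/usr/bin/env bash
# Hypothetical simplification of the waitforlisten pattern: poll until the
# target process has created its UNIX domain RPC socket, or give up.
waitforlisten() {
    local pid=$1 sock=$2
    for (( i = 0; i < 100; i++ )); do           # ~10 s budget at 0.1 s/try
        kill -0 "$pid" 2>/dev/null || return 1  # target died before listening
        [ -S "$sock" ] && return 0              # socket file exists
        sleep 0.1
    done
    return 1
}
```

Checking only that the socket file exists is an approximation; the real helper goes further and issues an RPC over the socket to confirm the server actually responds before the test proceeds.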
00:09:16.459 [2024-12-05 11:52:41.313430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124693 ] 00:09:16.719 [2024-12-05 11:52:41.614377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.719 [2024-12-05 11:52:41.638568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.288 11:52:42 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.288 11:52:42 json_config -- common/autotest_common.sh@868 -- # return 0 00:09:17.288 11:52:42 json_config -- json_config/common.sh@26 -- # echo '' 00:09:17.288 00:09:17.288 11:52:42 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:09:17.288 11:52:42 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:09:17.288 11:52:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.288 11:52:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:17.288 11:52:42 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:09:17.288 11:52:42 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:09:17.288 11:52:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.288 11:52:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:17.288 11:52:42 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:09:17.288 11:52:42 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:09:17.288 11:52:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:09:17.861 11:52:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.861 11:52:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:09:17.861 11:52:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@51 -- # local get_types 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@54 -- # sort 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:09:17.861 11:52:42 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:09:17.861 11:52:42 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:09:17.861 11:52:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.861 11:52:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:18.123 11:52:42 json_config -- json_config/json_config.sh@62 -- # return 0 00:09:18.123 11:52:42 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:09:18.123 11:52:42 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:09:18.123 11:52:42 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:09:18.123 11:52:42 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:09:18.123 11:52:42 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:09:18.123 11:52:42 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:09:18.123 11:52:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:18.123 11:52:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:18.123 11:52:42 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:09:18.123 11:52:42 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:09:18.123 11:52:42 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:09:18.123 11:52:42 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:18.123 11:52:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:09:18.123 MallocForNvmf0 00:09:18.123 11:52:43 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
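The `tgt_check_notification_types` trace a few records above builds `type_diff` with a compact shell idiom: concatenate the expected and reported type lists, split to one entry per line with `tr`, `sort`, then keep only lines that occur exactly once (`uniq -u`) — entries present in both lists appear twice and are dropped, so an empty result means the two sets match. A standalone sketch using the same type names:

```shell
#!/usr/bin/env bash
# Symmetric difference of two word lists via sort | uniq -u:
# names present in both lists occur twice and are filtered out by uniq -u,
# so an empty result means the sets are identical.
enabled_types="bdev_register bdev_unregister fsdev_register fsdev_unregister"
get_types="fsdev_register fsdev_unregister bdev_register bdev_unregister"

type_diff=$(echo $enabled_types $get_types | tr ' ' '\n' | sort | uniq -u)

if [ -z "$type_diff" ]; then
    echo "notification types match"
else
    echo "mismatch: $type_diff"
fi
```

Note this only works as a set difference when each list is itself duplicate-free, which holds for the notification type names here.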
00:09:18.123 11:52:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:09:18.383 MallocForNvmf1 00:09:18.383 11:52:43 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:09:18.383 11:52:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:09:18.644 [2024-12-05 11:52:43.447007] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:18.644 11:52:43 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:18.644 11:52:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:18.644 11:52:43 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:18.644 11:52:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:09:18.906 11:52:43 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:18.906 11:52:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:09:19.167 11:52:44 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:19.167 11:52:44 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:09:19.167 [2024-12-05 11:52:44.153151] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:09:19.167 11:52:44 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:09:19.167 11:52:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:19.167 11:52:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:19.426 11:52:44 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:09:19.426 11:52:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:19.426 11:52:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:19.426 11:52:44 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:09:19.426 11:52:44 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:19.426 11:52:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:09:19.426 MallocBdevForConfigChangeCheck 00:09:19.426 11:52:44 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:09:19.426 11:52:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:19.426 11:52:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:09:19.426 11:52:44 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:09:19.426 11:52:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:19.995 11:52:44 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:09:19.995 INFO: shutting down applications... 00:09:19.995 11:52:44 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:09:19.995 11:52:44 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:09:19.995 11:52:44 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:09:19.995 11:52:44 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:09:20.255 Calling clear_iscsi_subsystem 00:09:20.255 Calling clear_nvmf_subsystem 00:09:20.255 Calling clear_nbd_subsystem 00:09:20.255 Calling clear_ublk_subsystem 00:09:20.255 Calling clear_vhost_blk_subsystem 00:09:20.255 Calling clear_vhost_scsi_subsystem 00:09:20.255 Calling clear_bdev_subsystem 00:09:20.255 11:52:45 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:09:20.255 11:52:45 json_config -- json_config/json_config.sh@350 -- # count=100 00:09:20.255 11:52:45 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:09:20.255 11:52:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:09:20.255 11:52:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:09:20.255 11:52:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:09:20.825 11:52:45 json_config -- json_config/json_config.sh@352 -- # break 00:09:20.825 11:52:45 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:09:20.825 11:52:45 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:09:20.825 11:52:45 json_config -- json_config/common.sh@31 -- # local app=target 00:09:20.825 11:52:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:09:20.825 11:52:45 json_config -- json_config/common.sh@35 -- # [[ -n 1124693 ]] 00:09:20.825 11:52:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1124693 00:09:20.825 11:52:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:09:20.825 11:52:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:20.825 11:52:45 json_config -- json_config/common.sh@41 -- # kill -0 1124693 00:09:20.825 11:52:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:09:21.085 11:52:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:09:21.085 11:52:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:09:21.085 11:52:46 json_config -- json_config/common.sh@41 -- # kill -0 1124693 00:09:21.085 11:52:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:09:21.085 11:52:46 json_config -- json_config/common.sh@43 -- # break 00:09:21.085 11:52:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:09:21.085 11:52:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:09:21.085 SPDK target shutdown done 00:09:21.085 11:52:46 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:09:21.085 INFO: relaunching applications... 
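The shutdown just logged ("SPDK target shutdown done") follows the `json_config_test_shutdown_app` loop visible in the trace: send `SIGINT` to the target pid, then poll it with `kill -0` up to 30 times with a half-second sleep between tries. A hypothetical standalone sketch of that pattern — not the actual `json_config/common.sh`, which also clears the pid bookkeeping and handles the timeout path:

```shell
#!/usr/bin/env bash
set -m   # job control, so background children keep default SIGINT handling

# Hypothetical sketch of the shutdown loop traced above: SIGINT the target,
# then poll with `kill -0` (signal 0 = existence check only, nothing is sent)
# until it exits or the ~15 s budget (30 tries x 0.5 s) runs out.
shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid" 2>/dev/null
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || {
            echo "SPDK target shutdown done"
            return 0
        }
        sleep 0.5
    done
    return 1   # still running after the budget; a caller would escalate here
}
```

`set -m` matters in a non-interactive script: without job control, background children start with `SIGINT` ignored, and the signal would never reach them.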
00:09:21.085 11:52:46 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:21.085 11:52:46 json_config -- json_config/common.sh@9 -- # local app=target
00:09:21.085 11:52:46 json_config -- json_config/common.sh@10 -- # shift
00:09:21.085 11:52:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:21.085 11:52:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:21.085 11:52:46 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:09:21.085 11:52:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:21.085 11:52:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:21.085 11:52:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1125772
00:09:21.085 11:52:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:21.085 Waiting for target to run...
00:09:21.085 11:52:46 json_config -- json_config/common.sh@25 -- # waitforlisten 1125772 /var/tmp/spdk_tgt.sock
00:09:21.085 11:52:46 json_config -- common/autotest_common.sh@835 -- # '[' -z 1125772 ']'
00:09:21.085 11:52:46 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:21.085 11:52:46 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:21.085 11:52:46 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:21.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:21.085 11:52:46 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:21.085 11:52:46 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:21.085 11:52:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:21.346 [2024-12-05 11:52:46.171163] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:09:21.346 [2024-12-05 11:52:46.171225] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125772 ]
00:09:21.607 [2024-12-05 11:52:46.529167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:21.607 [2024-12-05 11:52:46.562721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:22.178 [2024-12-05 11:52:47.061134] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:22.178 [2024-12-05 11:52:47.093510] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:09:22.178 11:52:47 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:22.178 11:52:47 json_config -- common/autotest_common.sh@868 -- # return 0
00:09:22.178 11:52:47 json_config -- json_config/common.sh@26 -- # echo ''
00:09:22.178
00:09:22.178 11:52:47 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]]
00:09:22.178 11:52:47 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...'
00:09:22.178 INFO: Checking if target configuration is the same...
00:09:22.178 11:52:47 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:22.178 11:52:47 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config
00:09:22.178 11:52:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:22.178 + '[' 2 -ne 2 ']'
00:09:22.178 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:09:22.178 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:09:22.178 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:22.178 +++ basename /dev/fd/62
00:09:22.178 ++ mktemp /tmp/62.XXX
00:09:22.178 + tmp_file_1=/tmp/62.iNz
00:09:22.178 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:22.178 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:22.178 + tmp_file_2=/tmp/spdk_tgt_config.json.775
00:09:22.178 + ret=0
00:09:22.178 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:22.439 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:22.701 + diff -u /tmp/62.iNz /tmp/spdk_tgt_config.json.775
00:09:22.701 + echo 'INFO: JSON config files are the same'
00:09:22.701 INFO: JSON config files are the same
00:09:22.701 + rm /tmp/62.iNz /tmp/spdk_tgt_config.json.775
00:09:22.701 + exit 0
00:09:22.701 11:52:47 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]]
00:09:22.701 11:52:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...'
00:09:22.701 INFO: changing configuration and checking if this can be detected...
00:09:22.701 11:52:47 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:22.701 11:52:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:09:22.701 11:52:47 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:22.701 11:52:47 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config
00:09:22.701 11:52:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:09:22.701 + '[' 2 -ne 2 ']'
00:09:22.701 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:09:22.701 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:09:22.701 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:09:22.701 +++ basename /dev/fd/62
00:09:22.701 ++ mktemp /tmp/62.XXX
00:09:22.701 + tmp_file_1=/tmp/62.MDQ
00:09:22.701 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:22.701 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:09:22.701 + tmp_file_2=/tmp/spdk_tgt_config.json.o3L
00:09:22.701 + ret=0
00:09:22.701 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:23.272 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:09:23.272 + diff -u /tmp/62.MDQ /tmp/spdk_tgt_config.json.o3L
00:09:23.272 + ret=1
00:09:23.272 + echo '=== Start of file: /tmp/62.MDQ ==='
00:09:23.272 + cat /tmp/62.MDQ
00:09:23.272 + echo '=== End of file: /tmp/62.MDQ ==='
00:09:23.272 + echo ''
00:09:23.272 + echo '=== Start of file: /tmp/spdk_tgt_config.json.o3L ==='
00:09:23.272 + cat /tmp/spdk_tgt_config.json.o3L
00:09:23.272 + echo '=== End of file: /tmp/spdk_tgt_config.json.o3L ==='
00:09:23.272 + echo ''
00:09:23.272 + rm /tmp/62.MDQ /tmp/spdk_tgt_config.json.o3L
00:09:23.272 + exit 1
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.'
00:09:23.272 INFO: configuration change detected.
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini
00:09:23.272 11:52:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:23.272 11:52:48 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@314 -- # local ret=0
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]]
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@324 -- # [[ -n 1125772 ]]
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config
00:09:23.272 11:52:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:09:23.272 11:52:48 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]]
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@200 -- # uname -s
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]]
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]]
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config
00:09:23.272 11:52:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:23.272 11:52:48 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:23.272 11:52:48 json_config -- json_config/json_config.sh@330 -- # killprocess 1125772
00:09:23.272 11:52:48 json_config -- common/autotest_common.sh@954 -- # '[' -z 1125772 ']'
00:09:23.272 11:52:48 json_config -- common/autotest_common.sh@958 -- # kill -0 1125772
00:09:23.273 11:52:48 json_config -- common/autotest_common.sh@959 -- # uname
00:09:23.273 11:52:48 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:23.273 11:52:48 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1125772
00:09:23.273 11:52:48 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:23.273 11:52:48 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:23.273 11:52:48 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1125772'
00:09:23.273 killing process with pid 1125772
00:09:23.273 11:52:48 json_config -- common/autotest_common.sh@973 -- # kill 1125772
00:09:23.273 11:52:48 json_config -- common/autotest_common.sh@978 -- # wait 1125772
00:09:23.533 11:52:48 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:09:23.533 11:52:48 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini
00:09:23.533 11:52:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:23.533 11:52:48 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:23.533 11:52:48 json_config -- json_config/json_config.sh@335 -- # return 0
00:09:23.533 11:52:48 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success'
00:09:23.533 INFO: Success
00:09:23.533
00:09:23.533 real 0m7.478s
00:09:23.533 user 0m9.069s
00:09:23.533 sys 0m1.948s
00:09:23.533 11:52:48 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:23.533 11:52:48 json_config -- common/autotest_common.sh@10 -- # set +x
00:09:23.533 ************************************
00:09:23.533 END TEST json_config
00:09:23.533 ************************************
00:09:23.533 11:52:48 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:09:23.533 11:52:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:23.533 11:52:48 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:23.533 11:52:48 -- common/autotest_common.sh@10 -- # set +x
00:09:23.796 ************************************
00:09:23.796 START TEST json_config_extra_key
00:09:23.796 ************************************
00:09:23.796 11:52:48 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:09:23.796 11:52:48 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:23.796 11:52:48 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version
00:09:23.796 11:52:48 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:23.796 11:52:48 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-:
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-:
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<'
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@345 -- # : 1
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=1
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 1
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@353 -- # local d=2
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@355 -- # echo 2
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@368 -- # return 0
00:09:23.796 11:52:48 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:23.796 11:52:48 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:23.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:23.796 --rc genhtml_branch_coverage=1
00:09:23.796 --rc genhtml_function_coverage=1
00:09:23.796 --rc genhtml_legend=1
00:09:23.796 --rc geninfo_all_blocks=1
00:09:23.796 --rc geninfo_unexecuted_blocks=1
00:09:23.796
00:09:23.796 '
00:09:23.796 11:52:48 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:23.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:23.796 --rc genhtml_branch_coverage=1
00:09:23.796 --rc genhtml_function_coverage=1
00:09:23.796 --rc genhtml_legend=1
00:09:23.796 --rc geninfo_all_blocks=1
00:09:23.796 --rc geninfo_unexecuted_blocks=1
00:09:23.796
00:09:23.796 '
00:09:23.796 11:52:48 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:09:23.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:23.796 --rc genhtml_branch_coverage=1
00:09:23.796 --rc genhtml_function_coverage=1
00:09:23.796 --rc genhtml_legend=1
00:09:23.796 --rc geninfo_all_blocks=1
00:09:23.796 --rc geninfo_unexecuted_blocks=1
00:09:23.796
00:09:23.796 '
00:09:23.796 11:52:48 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:09:23.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:23.796 --rc genhtml_branch_coverage=1
00:09:23.796 --rc genhtml_function_coverage=1
00:09:23.796 --rc genhtml_legend=1
00:09:23.796 --rc geninfo_all_blocks=1
00:09:23.796 --rc geninfo_unexecuted_blocks=1
00:09:23.796
00:09:23.796 '
00:09:23.796 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:23.796 11:52:48 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:23.796 11:52:48 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:23.796 11:52:48 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:23.796 11:52:48 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:23.796 11:52:48 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:09:23.796 11:52:48 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@50 -- # : 0
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:09:23.796 11:52:48 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0
00:09:23.796 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:09:23.796 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:09:23.796 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:09:23.796 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:09:23.796 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:09:23.797 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:09:23.797 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:09:23.797 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:09:23.797 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:09:23.797 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:09:23.797 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...'
00:09:23.797 INFO: launching applications...
00:09:23.797 11:52:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:09:23.797 11:52:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:09:23.797 11:52:48 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:09:23.797 11:52:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:09:23.797 11:52:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:09:23.797 11:52:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:09:23.797 11:52:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:23.797 11:52:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:09:23.797 11:52:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1126307
00:09:23.797 11:52:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:09:23.797 Waiting for target to run...
00:09:23.797 11:52:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1126307 /var/tmp/spdk_tgt.sock
00:09:23.797 11:52:48 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1126307 ']'
00:09:23.797 11:52:48 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:09:23.797 11:52:48 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:09:23.797 11:52:48 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:23.797 11:52:48 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:09:23.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:09:23.797 11:52:48 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:23.797 11:52:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:24.058 [2024-12-05 11:52:48.853377] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:09:24.059 [2024-12-05 11:52:48.853482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126307 ]
00:09:24.319 [2024-12-05 11:52:49.132946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:24.319 [2024-12-05 11:52:49.158065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:24.892 11:52:49 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:24.892 11:52:49 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0
00:09:24.892 11:52:49 json_config_extra_key -- json_config/common.sh@26 -- # echo ''
00:09:24.892
00:09:24.892 11:52:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...'
00:09:24.892 INFO: shutting down applications...
00:09:24.892 11:52:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target
00:09:24.892 11:52:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target
00:09:24.892 11:52:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:09:24.892 11:52:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1126307 ]]
00:09:24.892 11:52:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1126307
00:09:24.892 11:52:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 ))
00:09:24.892 11:52:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:24.892 11:52:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1126307
00:09:24.892 11:52:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5
00:09:25.153 11:52:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ ))
00:09:25.153 11:52:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 ))
00:09:25.153 11:52:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1126307
00:09:25.153 11:52:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]=
00:09:25.153 11:52:50 json_config_extra_key -- json_config/common.sh@43 -- # break
00:09:25.153 11:52:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]]
00:09:25.153 11:52:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:09:25.153 SPDK target shutdown done
00:09:25.153 11:52:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success
00:09:25.153 Success
00:09:25.153
00:09:25.153 real 0m1.566s
00:09:25.153 user 0m1.160s
00:09:25.153 sys 0m0.421s
00:09:25.153 11:52:50 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:25.153 11:52:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:09:25.153 ************************************
00:09:25.153 END TEST json_config_extra_key
00:09:25.153 ************************************
00:09:25.153 11:52:50 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
00:09:25.153 11:52:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:25.153 11:52:50 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:25.153 11:52:50 -- common/autotest_common.sh@10 -- # set +x
00:09:25.413 ************************************
00:09:25.413 START TEST alias_rpc
00:09:25.413 ************************************
00:09:25.413 11:52:50 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh
* Looking for test storage...
00:09:25.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc
00:09:25.413 11:52:50 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:25.413 11:52:50 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:09:25.413 11:52:50 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:25.413 11:52:50 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@336 -- # IFS=.-:
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@337 -- # IFS=.-:
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@338 -- # local 'op=<'
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@344 -- # case "$op" in
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@345 -- # : 1
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@365 -- # decimal 1
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@353 -- # local d=1
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@355 -- # echo 1
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@366 -- # decimal 2
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@353 -- # local d=2
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@355 -- # echo 2
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:25.413 11:52:50 alias_rpc -- scripts/common.sh@368 -- # return 0
00:09:25.413 11:52:50 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:25.413 11:52:50 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:09:25.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:25.413 --rc genhtml_branch_coverage=1
00:09:25.413 --rc genhtml_function_coverage=1
00:09:25.413 --rc genhtml_legend=1
00:09:25.413 --rc geninfo_all_blocks=1
00:09:25.413 --rc geninfo_unexecuted_blocks=1
00:09:25.413
00:09:25.413 '
00:09:25.413 11:52:50 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:09:25.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:25.413 --rc genhtml_branch_coverage=1
00:09:25.413 --rc genhtml_function_coverage=1
00:09:25.413 --rc genhtml_legend=1
00:09:25.413 --rc geninfo_all_blocks=1
00:09:25.413 --rc geninfo_unexecuted_blocks=1
00:09:25.413
00:09:25.413 '
00:09:25.413 11:52:50 alias_rpc -- common/autotest_common.sh@1725 --
# export 'LCOV=lcov 00:09:25.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.413 --rc genhtml_branch_coverage=1 00:09:25.413 --rc genhtml_function_coverage=1 00:09:25.413 --rc genhtml_legend=1 00:09:25.413 --rc geninfo_all_blocks=1 00:09:25.413 --rc geninfo_unexecuted_blocks=1 00:09:25.413 00:09:25.413 ' 00:09:25.413 11:52:50 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:25.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.413 --rc genhtml_branch_coverage=1 00:09:25.413 --rc genhtml_function_coverage=1 00:09:25.413 --rc genhtml_legend=1 00:09:25.413 --rc geninfo_all_blocks=1 00:09:25.413 --rc geninfo_unexecuted_blocks=1 00:09:25.413 00:09:25.413 ' 00:09:25.413 11:52:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:09:25.414 11:52:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1126699 00:09:25.414 11:52:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1126699 00:09:25.414 11:52:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:25.414 11:52:50 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1126699 ']' 00:09:25.414 11:52:50 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:25.414 11:52:50 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:25.414 11:52:50 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:25.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:25.414 11:52:50 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:25.414 11:52:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:25.674 [2024-12-05 11:52:50.484748] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:09:25.674 [2024-12-05 11:52:50.484818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126699 ] 00:09:25.674 [2024-12-05 11:52:50.572244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.674 [2024-12-05 11:52:50.607748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.243 11:52:51 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:26.243 11:52:51 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:26.243 11:52:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:09:26.502 11:52:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1126699 00:09:26.502 11:52:51 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1126699 ']' 00:09:26.502 11:52:51 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1126699 00:09:26.502 11:52:51 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:09:26.502 11:52:51 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:26.502 11:52:51 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1126699 00:09:26.502 11:52:51 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:26.502 11:52:51 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:26.502 11:52:51 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1126699' 00:09:26.502 killing process with pid 1126699 00:09:26.502 11:52:51 alias_rpc -- common/autotest_common.sh@973 -- # kill 1126699 00:09:26.502 11:52:51 alias_rpc -- common/autotest_common.sh@978 -- # wait 1126699 00:09:26.762 00:09:26.762 real 0m1.469s 00:09:26.762 user 0m1.602s 00:09:26.762 sys 0m0.412s 00:09:26.762 11:52:51 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.762 11:52:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:26.762 ************************************ 00:09:26.762 END TEST alias_rpc 00:09:26.762 ************************************ 00:09:26.762 11:52:51 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:09:26.762 11:52:51 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:26.762 11:52:51 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:26.762 11:52:51 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.762 11:52:51 -- common/autotest_common.sh@10 -- # set +x 00:09:26.762 ************************************ 00:09:26.762 START TEST spdkcli_tcp 00:09:26.762 ************************************ 00:09:26.762 11:52:51 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:09:27.073 * Looking for test storage... 
00:09:27.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:27.074 11:52:51 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:27.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.074 --rc genhtml_branch_coverage=1 00:09:27.074 --rc genhtml_function_coverage=1 00:09:27.074 --rc genhtml_legend=1 00:09:27.074 --rc geninfo_all_blocks=1 00:09:27.074 --rc geninfo_unexecuted_blocks=1 00:09:27.074 00:09:27.074 ' 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:27.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.074 --rc genhtml_branch_coverage=1 00:09:27.074 --rc genhtml_function_coverage=1 00:09:27.074 --rc genhtml_legend=1 00:09:27.074 --rc geninfo_all_blocks=1 00:09:27.074 --rc geninfo_unexecuted_blocks=1 00:09:27.074 00:09:27.074 ' 00:09:27.074 11:52:51 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:27.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.074 --rc genhtml_branch_coverage=1 00:09:27.074 --rc genhtml_function_coverage=1 00:09:27.074 --rc genhtml_legend=1 00:09:27.074 --rc geninfo_all_blocks=1 00:09:27.074 --rc geninfo_unexecuted_blocks=1 00:09:27.074 00:09:27.074 ' 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:27.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:27.074 --rc genhtml_branch_coverage=1 00:09:27.074 --rc genhtml_function_coverage=1 00:09:27.074 --rc genhtml_legend=1 00:09:27.074 --rc geninfo_all_blocks=1 00:09:27.074 --rc geninfo_unexecuted_blocks=1 00:09:27.074 00:09:27.074 ' 00:09:27.074 11:52:51 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:09:27.074 11:52:51 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:09:27.074 11:52:51 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:09:27.074 11:52:51 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:09:27.074 11:52:51 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:09:27.074 11:52:51 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:09:27.074 11:52:51 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.074 11:52:51 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1127097 00:09:27.074 11:52:51 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1127097 00:09:27.074 11:52:51 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1127097 ']' 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:27.074 11:52:51 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:27.074 [2024-12-05 11:52:52.047714] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:09:27.074 [2024-12-05 11:52:52.047783] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127097 ] 00:09:27.334 [2024-12-05 11:52:52.135507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:27.334 [2024-12-05 11:52:52.171354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.334 [2024-12-05 11:52:52.171356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.903 11:52:52 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.903 11:52:52 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:09:27.903 11:52:52 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1127383 00:09:27.903 11:52:52 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:09:27.903 11:52:52 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:09:28.162 [ 00:09:28.162 "bdev_malloc_delete", 00:09:28.162 "bdev_malloc_create", 00:09:28.162 "bdev_null_resize", 00:09:28.162 "bdev_null_delete", 00:09:28.162 "bdev_null_create", 00:09:28.162 "bdev_nvme_cuse_unregister", 00:09:28.162 "bdev_nvme_cuse_register", 00:09:28.162 "bdev_opal_new_user", 00:09:28.162 "bdev_opal_set_lock_state", 00:09:28.162 "bdev_opal_delete", 00:09:28.162 "bdev_opal_get_info", 00:09:28.162 "bdev_opal_create", 00:09:28.162 "bdev_nvme_opal_revert", 00:09:28.162 "bdev_nvme_opal_init", 00:09:28.162 "bdev_nvme_send_cmd", 00:09:28.162 "bdev_nvme_set_keys", 00:09:28.162 "bdev_nvme_get_path_iostat", 00:09:28.162 "bdev_nvme_get_mdns_discovery_info", 00:09:28.162 "bdev_nvme_stop_mdns_discovery", 00:09:28.162 "bdev_nvme_start_mdns_discovery", 00:09:28.162 "bdev_nvme_set_multipath_policy", 00:09:28.162 "bdev_nvme_set_preferred_path", 00:09:28.162 "bdev_nvme_get_io_paths", 00:09:28.162 "bdev_nvme_remove_error_injection", 00:09:28.162 "bdev_nvme_add_error_injection", 00:09:28.162 "bdev_nvme_get_discovery_info", 00:09:28.162 "bdev_nvme_stop_discovery", 00:09:28.162 "bdev_nvme_start_discovery", 00:09:28.162 "bdev_nvme_get_controller_health_info", 00:09:28.162 "bdev_nvme_disable_controller", 00:09:28.162 "bdev_nvme_enable_controller", 00:09:28.162 "bdev_nvme_reset_controller", 00:09:28.162 "bdev_nvme_get_transport_statistics", 00:09:28.162 "bdev_nvme_apply_firmware", 00:09:28.162 "bdev_nvme_detach_controller", 00:09:28.162 "bdev_nvme_get_controllers", 00:09:28.162 "bdev_nvme_attach_controller", 00:09:28.162 "bdev_nvme_set_hotplug", 00:09:28.162 "bdev_nvme_set_options", 00:09:28.163 "bdev_passthru_delete", 00:09:28.163 "bdev_passthru_create", 00:09:28.163 "bdev_lvol_set_parent_bdev", 00:09:28.163 "bdev_lvol_set_parent", 00:09:28.163 "bdev_lvol_check_shallow_copy", 00:09:28.163 "bdev_lvol_start_shallow_copy", 00:09:28.163 "bdev_lvol_grow_lvstore", 00:09:28.163 
"bdev_lvol_get_lvols", 00:09:28.163 "bdev_lvol_get_lvstores", 00:09:28.163 "bdev_lvol_delete", 00:09:28.163 "bdev_lvol_set_read_only", 00:09:28.163 "bdev_lvol_resize", 00:09:28.163 "bdev_lvol_decouple_parent", 00:09:28.163 "bdev_lvol_inflate", 00:09:28.163 "bdev_lvol_rename", 00:09:28.163 "bdev_lvol_clone_bdev", 00:09:28.163 "bdev_lvol_clone", 00:09:28.163 "bdev_lvol_snapshot", 00:09:28.163 "bdev_lvol_create", 00:09:28.163 "bdev_lvol_delete_lvstore", 00:09:28.163 "bdev_lvol_rename_lvstore", 00:09:28.163 "bdev_lvol_create_lvstore", 00:09:28.163 "bdev_raid_set_options", 00:09:28.163 "bdev_raid_remove_base_bdev", 00:09:28.163 "bdev_raid_add_base_bdev", 00:09:28.163 "bdev_raid_delete", 00:09:28.163 "bdev_raid_create", 00:09:28.163 "bdev_raid_get_bdevs", 00:09:28.163 "bdev_error_inject_error", 00:09:28.163 "bdev_error_delete", 00:09:28.163 "bdev_error_create", 00:09:28.163 "bdev_split_delete", 00:09:28.163 "bdev_split_create", 00:09:28.163 "bdev_delay_delete", 00:09:28.163 "bdev_delay_create", 00:09:28.163 "bdev_delay_update_latency", 00:09:28.163 "bdev_zone_block_delete", 00:09:28.163 "bdev_zone_block_create", 00:09:28.163 "blobfs_create", 00:09:28.163 "blobfs_detect", 00:09:28.163 "blobfs_set_cache_size", 00:09:28.163 "bdev_aio_delete", 00:09:28.163 "bdev_aio_rescan", 00:09:28.163 "bdev_aio_create", 00:09:28.163 "bdev_ftl_set_property", 00:09:28.163 "bdev_ftl_get_properties", 00:09:28.163 "bdev_ftl_get_stats", 00:09:28.163 "bdev_ftl_unmap", 00:09:28.163 "bdev_ftl_unload", 00:09:28.163 "bdev_ftl_delete", 00:09:28.163 "bdev_ftl_load", 00:09:28.163 "bdev_ftl_create", 00:09:28.163 "bdev_virtio_attach_controller", 00:09:28.163 "bdev_virtio_scsi_get_devices", 00:09:28.163 "bdev_virtio_detach_controller", 00:09:28.163 "bdev_virtio_blk_set_hotplug", 00:09:28.163 "bdev_iscsi_delete", 00:09:28.163 "bdev_iscsi_create", 00:09:28.163 "bdev_iscsi_set_options", 00:09:28.163 "accel_error_inject_error", 00:09:28.163 "ioat_scan_accel_module", 00:09:28.163 "dsa_scan_accel_module", 
00:09:28.163 "iaa_scan_accel_module", 00:09:28.163 "vfu_virtio_create_fs_endpoint", 00:09:28.163 "vfu_virtio_create_scsi_endpoint", 00:09:28.163 "vfu_virtio_scsi_remove_target", 00:09:28.163 "vfu_virtio_scsi_add_target", 00:09:28.163 "vfu_virtio_create_blk_endpoint", 00:09:28.163 "vfu_virtio_delete_endpoint", 00:09:28.163 "keyring_file_remove_key", 00:09:28.163 "keyring_file_add_key", 00:09:28.163 "keyring_linux_set_options", 00:09:28.163 "fsdev_aio_delete", 00:09:28.163 "fsdev_aio_create", 00:09:28.163 "iscsi_get_histogram", 00:09:28.163 "iscsi_enable_histogram", 00:09:28.163 "iscsi_set_options", 00:09:28.163 "iscsi_get_auth_groups", 00:09:28.163 "iscsi_auth_group_remove_secret", 00:09:28.163 "iscsi_auth_group_add_secret", 00:09:28.163 "iscsi_delete_auth_group", 00:09:28.163 "iscsi_create_auth_group", 00:09:28.163 "iscsi_set_discovery_auth", 00:09:28.163 "iscsi_get_options", 00:09:28.163 "iscsi_target_node_request_logout", 00:09:28.163 "iscsi_target_node_set_redirect", 00:09:28.163 "iscsi_target_node_set_auth", 00:09:28.163 "iscsi_target_node_add_lun", 00:09:28.163 "iscsi_get_stats", 00:09:28.163 "iscsi_get_connections", 00:09:28.163 "iscsi_portal_group_set_auth", 00:09:28.163 "iscsi_start_portal_group", 00:09:28.163 "iscsi_delete_portal_group", 00:09:28.163 "iscsi_create_portal_group", 00:09:28.163 "iscsi_get_portal_groups", 00:09:28.163 "iscsi_delete_target_node", 00:09:28.163 "iscsi_target_node_remove_pg_ig_maps", 00:09:28.163 "iscsi_target_node_add_pg_ig_maps", 00:09:28.163 "iscsi_create_target_node", 00:09:28.163 "iscsi_get_target_nodes", 00:09:28.163 "iscsi_delete_initiator_group", 00:09:28.163 "iscsi_initiator_group_remove_initiators", 00:09:28.163 "iscsi_initiator_group_add_initiators", 00:09:28.163 "iscsi_create_initiator_group", 00:09:28.163 "iscsi_get_initiator_groups", 00:09:28.163 "nvmf_set_crdt", 00:09:28.163 "nvmf_set_config", 00:09:28.163 "nvmf_set_max_subsystems", 00:09:28.163 "nvmf_stop_mdns_prr", 00:09:28.163 "nvmf_publish_mdns_prr", 
00:09:28.163 "nvmf_subsystem_get_listeners", 00:09:28.163 "nvmf_subsystem_get_qpairs", 00:09:28.163 "nvmf_subsystem_get_controllers", 00:09:28.163 "nvmf_get_stats", 00:09:28.163 "nvmf_get_transports", 00:09:28.163 "nvmf_create_transport", 00:09:28.163 "nvmf_get_targets", 00:09:28.163 "nvmf_delete_target", 00:09:28.163 "nvmf_create_target", 00:09:28.163 "nvmf_subsystem_allow_any_host", 00:09:28.163 "nvmf_subsystem_set_keys", 00:09:28.163 "nvmf_subsystem_remove_host", 00:09:28.163 "nvmf_subsystem_add_host", 00:09:28.163 "nvmf_ns_remove_host", 00:09:28.163 "nvmf_ns_add_host", 00:09:28.163 "nvmf_subsystem_remove_ns", 00:09:28.163 "nvmf_subsystem_set_ns_ana_group", 00:09:28.163 "nvmf_subsystem_add_ns", 00:09:28.163 "nvmf_subsystem_listener_set_ana_state", 00:09:28.163 "nvmf_discovery_get_referrals", 00:09:28.163 "nvmf_discovery_remove_referral", 00:09:28.163 "nvmf_discovery_add_referral", 00:09:28.163 "nvmf_subsystem_remove_listener", 00:09:28.163 "nvmf_subsystem_add_listener", 00:09:28.163 "nvmf_delete_subsystem", 00:09:28.163 "nvmf_create_subsystem", 00:09:28.163 "nvmf_get_subsystems", 00:09:28.163 "env_dpdk_get_mem_stats", 00:09:28.163 "nbd_get_disks", 00:09:28.163 "nbd_stop_disk", 00:09:28.163 "nbd_start_disk", 00:09:28.163 "ublk_recover_disk", 00:09:28.163 "ublk_get_disks", 00:09:28.163 "ublk_stop_disk", 00:09:28.163 "ublk_start_disk", 00:09:28.163 "ublk_destroy_target", 00:09:28.163 "ublk_create_target", 00:09:28.163 "virtio_blk_create_transport", 00:09:28.163 "virtio_blk_get_transports", 00:09:28.163 "vhost_controller_set_coalescing", 00:09:28.163 "vhost_get_controllers", 00:09:28.163 "vhost_delete_controller", 00:09:28.163 "vhost_create_blk_controller", 00:09:28.163 "vhost_scsi_controller_remove_target", 00:09:28.163 "vhost_scsi_controller_add_target", 00:09:28.163 "vhost_start_scsi_controller", 00:09:28.163 "vhost_create_scsi_controller", 00:09:28.163 "thread_set_cpumask", 00:09:28.163 "scheduler_set_options", 00:09:28.163 "framework_get_governor", 00:09:28.163 
"framework_get_scheduler", 00:09:28.163 "framework_set_scheduler", 00:09:28.163 "framework_get_reactors", 00:09:28.163 "thread_get_io_channels", 00:09:28.163 "thread_get_pollers", 00:09:28.163 "thread_get_stats", 00:09:28.163 "framework_monitor_context_switch", 00:09:28.163 "spdk_kill_instance", 00:09:28.163 "log_enable_timestamps", 00:09:28.163 "log_get_flags", 00:09:28.163 "log_clear_flag", 00:09:28.163 "log_set_flag", 00:09:28.163 "log_get_level", 00:09:28.163 "log_set_level", 00:09:28.163 "log_get_print_level", 00:09:28.163 "log_set_print_level", 00:09:28.163 "framework_enable_cpumask_locks", 00:09:28.163 "framework_disable_cpumask_locks", 00:09:28.163 "framework_wait_init", 00:09:28.163 "framework_start_init", 00:09:28.163 "scsi_get_devices", 00:09:28.163 "bdev_get_histogram", 00:09:28.163 "bdev_enable_histogram", 00:09:28.163 "bdev_set_qos_limit", 00:09:28.163 "bdev_set_qd_sampling_period", 00:09:28.163 "bdev_get_bdevs", 00:09:28.163 "bdev_reset_iostat", 00:09:28.163 "bdev_get_iostat", 00:09:28.163 "bdev_examine", 00:09:28.163 "bdev_wait_for_examine", 00:09:28.163 "bdev_set_options", 00:09:28.163 "accel_get_stats", 00:09:28.163 "accel_set_options", 00:09:28.163 "accel_set_driver", 00:09:28.163 "accel_crypto_key_destroy", 00:09:28.163 "accel_crypto_keys_get", 00:09:28.163 "accel_crypto_key_create", 00:09:28.163 "accel_assign_opc", 00:09:28.163 "accel_get_module_info", 00:09:28.163 "accel_get_opc_assignments", 00:09:28.163 "vmd_rescan", 00:09:28.163 "vmd_remove_device", 00:09:28.163 "vmd_enable", 00:09:28.163 "sock_get_default_impl", 00:09:28.163 "sock_set_default_impl", 00:09:28.163 "sock_impl_set_options", 00:09:28.163 "sock_impl_get_options", 00:09:28.163 "iobuf_get_stats", 00:09:28.163 "iobuf_set_options", 00:09:28.163 "keyring_get_keys", 00:09:28.163 "vfu_tgt_set_base_path", 00:09:28.163 "framework_get_pci_devices", 00:09:28.163 "framework_get_config", 00:09:28.163 "framework_get_subsystems", 00:09:28.163 "fsdev_set_opts", 00:09:28.163 "fsdev_get_opts", 
00:09:28.163 "trace_get_info", 00:09:28.163 "trace_get_tpoint_group_mask", 00:09:28.163 "trace_disable_tpoint_group", 00:09:28.163 "trace_enable_tpoint_group", 00:09:28.163 "trace_clear_tpoint_mask", 00:09:28.163 "trace_set_tpoint_mask", 00:09:28.163 "notify_get_notifications", 00:09:28.163 "notify_get_types", 00:09:28.163 "spdk_get_version", 00:09:28.163 "rpc_get_methods" 00:09:28.163 ] 00:09:28.163 11:52:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:09:28.163 11:52:53 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:28.163 11:52:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.163 11:52:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:28.163 11:52:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1127097 00:09:28.163 11:52:53 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1127097 ']' 00:09:28.163 11:52:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1127097 00:09:28.163 11:52:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:09:28.163 11:52:53 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.164 11:52:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1127097 00:09:28.164 11:52:53 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:28.164 11:52:53 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:28.164 11:52:53 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1127097' 00:09:28.164 killing process with pid 1127097 00:09:28.164 11:52:53 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1127097 00:09:28.164 11:52:53 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1127097 00:09:28.423 00:09:28.423 real 0m1.539s 00:09:28.423 user 0m2.799s 00:09:28.423 sys 0m0.473s 00:09:28.423 11:52:53 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:28.423 11:52:53 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.423 ************************************ 00:09:28.423 END TEST spdkcli_tcp 00:09:28.423 ************************************ 00:09:28.423 11:52:53 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:28.423 11:52:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.423 11:52:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.423 11:52:53 -- common/autotest_common.sh@10 -- # set +x 00:09:28.423 ************************************ 00:09:28.423 START TEST dpdk_mem_utility 00:09:28.423 ************************************ 00:09:28.423 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:09:28.683 * Looking for test storage... 00:09:28.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:28.683 11:52:53 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 
'LCOV_OPTS= 00:09:28.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.683 --rc genhtml_branch_coverage=1 00:09:28.683 --rc genhtml_function_coverage=1 00:09:28.683 --rc genhtml_legend=1 00:09:28.683 --rc geninfo_all_blocks=1 00:09:28.683 --rc geninfo_unexecuted_blocks=1 00:09:28.683 00:09:28.683 ' 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:28.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.683 --rc genhtml_branch_coverage=1 00:09:28.683 --rc genhtml_function_coverage=1 00:09:28.683 --rc genhtml_legend=1 00:09:28.683 --rc geninfo_all_blocks=1 00:09:28.683 --rc geninfo_unexecuted_blocks=1 00:09:28.683 00:09:28.683 ' 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:28.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.683 --rc genhtml_branch_coverage=1 00:09:28.683 --rc genhtml_function_coverage=1 00:09:28.683 --rc genhtml_legend=1 00:09:28.683 --rc geninfo_all_blocks=1 00:09:28.683 --rc geninfo_unexecuted_blocks=1 00:09:28.683 00:09:28.683 ' 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:28.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:28.683 --rc genhtml_branch_coverage=1 00:09:28.683 --rc genhtml_function_coverage=1 00:09:28.683 --rc genhtml_legend=1 00:09:28.683 --rc geninfo_all_blocks=1 00:09:28.683 --rc geninfo_unexecuted_blocks=1 00:09:28.683 00:09:28.683 ' 00:09:28.683 11:52:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:28.683 11:52:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1127513 00:09:28.683 11:52:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1127513 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@835 -- # 
'[' -z 1127513 ']' 00:09:28.683 11:52:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.683 11:52:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:28.683 [2024-12-05 11:52:53.655054] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:09:28.683 [2024-12-05 11:52:53.655118] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127513 ] 00:09:28.943 [2024-12-05 11:52:53.743004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.943 [2024-12-05 11:52:53.777801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:29.515 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:29.515 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:09:29.515 11:52:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:09:29.515 11:52:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:09:29.515 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:29.515 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:29.515 { 00:09:29.515 "filename": "/tmp/spdk_mem_dump.txt" 00:09:29.515 } 00:09:29.515 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.515 11:52:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:09:29.515 DPDK memory size 818.000000 MiB in 1 heap(s) 00:09:29.515 1 heaps totaling size 818.000000 MiB 00:09:29.515 size: 818.000000 MiB heap id: 0 00:09:29.515 end heaps---------- 00:09:29.515 9 mempools totaling size 603.782043 MiB 00:09:29.515 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:09:29.515 size: 158.602051 MiB name: PDU_data_out_Pool 00:09:29.515 size: 100.555481 MiB name: bdev_io_1127513 00:09:29.515 size: 50.003479 MiB name: msgpool_1127513 00:09:29.515 size: 36.509338 MiB name: fsdev_io_1127513 00:09:29.515 size: 21.763794 MiB name: PDU_Pool 00:09:29.515 size: 19.513306 MiB name: SCSI_TASK_Pool 00:09:29.515 size: 4.133484 MiB name: evtpool_1127513 00:09:29.515 size: 0.026123 MiB name: Session_Pool 00:09:29.515 end mempools------- 00:09:29.515 6 memzones totaling size 4.142822 MiB 00:09:29.515 size: 1.000366 MiB name: RG_ring_0_1127513 00:09:29.515 size: 1.000366 MiB name: RG_ring_1_1127513 00:09:29.515 size: 1.000366 MiB name: RG_ring_4_1127513 00:09:29.515 size: 1.000366 MiB name: RG_ring_5_1127513 00:09:29.515 size: 0.125366 MiB name: RG_ring_2_1127513 00:09:29.515 size: 0.015991 MiB name: RG_ring_3_1127513 00:09:29.515 end memzones------- 00:09:29.515 11:52:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:09:29.515 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:09:29.515 list of free elements. 
size: 10.852478 MiB 00:09:29.515 element at address: 0x200019200000 with size: 0.999878 MiB 00:09:29.515 element at address: 0x200019400000 with size: 0.999878 MiB 00:09:29.515 element at address: 0x200000400000 with size: 0.998535 MiB 00:09:29.515 element at address: 0x200032000000 with size: 0.994446 MiB 00:09:29.515 element at address: 0x200006400000 with size: 0.959839 MiB 00:09:29.515 element at address: 0x200012c00000 with size: 0.944275 MiB 00:09:29.515 element at address: 0x200019600000 with size: 0.936584 MiB 00:09:29.515 element at address: 0x200000200000 with size: 0.717346 MiB 00:09:29.515 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:09:29.515 element at address: 0x200000c00000 with size: 0.495422 MiB 00:09:29.515 element at address: 0x20000a600000 with size: 0.490723 MiB 00:09:29.515 element at address: 0x200019800000 with size: 0.485657 MiB 00:09:29.515 element at address: 0x200003e00000 with size: 0.481934 MiB 00:09:29.515 element at address: 0x200028200000 with size: 0.410034 MiB 00:09:29.515 element at address: 0x200000800000 with size: 0.355042 MiB 00:09:29.515 list of standard malloc elements. 
size: 199.218628 MiB 00:09:29.515 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:09:29.515 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:09:29.515 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:09:29.515 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:09:29.515 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:09:29.515 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:09:29.515 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:09:29.515 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:09:29.515 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:09:29.515 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:09:29.515 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:09:29.515 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:09:29.515 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:09:29.515 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:09:29.515 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:09:29.515 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:09:29.515 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:09:29.515 element at address: 0x20000085b040 with size: 0.000183 MiB 00:09:29.515 element at address: 0x20000085f300 with size: 0.000183 MiB 00:09:29.515 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:09:29.515 element at address: 0x20000087f680 with size: 0.000183 MiB 00:09:29.515 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:09:29.515 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:09:29.515 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:09:29.515 element at address: 0x200000cff000 with size: 0.000183 MiB 00:09:29.515 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:09:29.515 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:09:29.515 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:09:29.515 element at address: 0x200003efb980 with size: 0.000183 MiB 00:09:29.515 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:09:29.515 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:09:29.515 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:09:29.515 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:09:29.515 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:09:29.515 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:09:29.515 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:09:29.515 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:09:29.515 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:09:29.515 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:09:29.515 element at address: 0x200028268f80 with size: 0.000183 MiB 00:09:29.515 element at address: 0x200028269040 with size: 0.000183 MiB 00:09:29.515 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:09:29.515 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:09:29.515 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:09:29.515 list of memzone associated elements. 
size: 607.928894 MiB 00:09:29.515 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:09:29.515 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:09:29.515 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:09:29.515 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:09:29.515 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:09:29.515 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1127513_0 00:09:29.515 element at address: 0x200000dff380 with size: 48.003052 MiB 00:09:29.515 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1127513_0 00:09:29.516 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:09:29.516 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1127513_0 00:09:29.516 element at address: 0x2000199be940 with size: 20.255554 MiB 00:09:29.516 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:09:29.516 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:09:29.516 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:09:29.516 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:09:29.516 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1127513_0 00:09:29.516 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:09:29.516 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1127513 00:09:29.516 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:09:29.516 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1127513 00:09:29.516 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:09:29.516 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:09:29.516 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:09:29.516 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:09:29.516 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:09:29.516 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:09:29.516 element at address: 0x200003efba40 with size: 1.008118 MiB 00:09:29.516 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:09:29.516 element at address: 0x200000cff180 with size: 1.000488 MiB 00:09:29.516 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1127513 00:09:29.516 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:09:29.516 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1127513 00:09:29.516 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:09:29.516 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1127513 00:09:29.516 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:09:29.516 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1127513 00:09:29.516 element at address: 0x20000087f740 with size: 0.500488 MiB 00:09:29.516 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1127513 00:09:29.516 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:09:29.516 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1127513 00:09:29.516 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:09:29.516 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:09:29.516 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:09:29.516 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:09:29.516 element at address: 0x20001987c540 with size: 0.250488 MiB 00:09:29.516 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:09:29.516 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:09:29.516 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1127513 00:09:29.516 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:09:29.516 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1127513 00:09:29.516 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:09:29.516 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:09:29.516 element at address: 0x200028269100 with size: 0.023743 MiB 00:09:29.516 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:09:29.516 element at address: 0x20000085b100 with size: 0.016113 MiB 00:09:29.516 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1127513 00:09:29.516 element at address: 0x20002826f240 with size: 0.002441 MiB 00:09:29.516 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:09:29.516 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:09:29.516 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1127513 00:09:29.516 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:09:29.516 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1127513 00:09:29.516 element at address: 0x20000085af00 with size: 0.000305 MiB 00:09:29.516 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1127513 00:09:29.516 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:09:29.516 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:09:29.516 11:52:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:09:29.516 11:52:54 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1127513 00:09:29.516 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1127513 ']' 00:09:29.516 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1127513 00:09:29.516 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:09:29.516 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:29.516 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1127513 00:09:29.776 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:29.776 11:52:54 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:29.776 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1127513' 00:09:29.776 killing process with pid 1127513 00:09:29.776 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1127513 00:09:29.776 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1127513 00:09:29.776 00:09:29.776 real 0m1.393s 00:09:29.776 user 0m1.470s 00:09:29.776 sys 0m0.411s 00:09:29.776 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:29.776 11:52:54 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:09:29.776 ************************************ 00:09:29.776 END TEST dpdk_mem_utility 00:09:29.776 ************************************ 00:09:29.776 11:52:54 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:29.776 11:52:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:29.777 11:52:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:29.777 11:52:54 -- common/autotest_common.sh@10 -- # set +x 00:09:30.037 ************************************ 00:09:30.037 START TEST event 00:09:30.037 ************************************ 00:09:30.037 11:52:54 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:09:30.037 * Looking for test storage... 
00:09:30.037 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:30.037 11:52:54 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.037 11:52:54 event -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.037 11:52:54 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.037 11:52:55 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.037 11:52:55 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.037 11:52:55 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.037 11:52:55 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.037 11:52:55 event -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.037 11:52:55 event -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.037 11:52:55 event -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.037 11:52:55 event -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.037 11:52:55 event -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.037 11:52:55 event -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.037 11:52:55 event -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.037 11:52:55 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.037 11:52:55 event -- scripts/common.sh@344 -- # case "$op" in 00:09:30.037 11:52:55 event -- scripts/common.sh@345 -- # : 1 00:09:30.037 11:52:55 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.037 11:52:55 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:30.037 11:52:55 event -- scripts/common.sh@365 -- # decimal 1 00:09:30.037 11:52:55 event -- scripts/common.sh@353 -- # local d=1 00:09:30.037 11:52:55 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.037 11:52:55 event -- scripts/common.sh@355 -- # echo 1 00:09:30.037 11:52:55 event -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.037 11:52:55 event -- scripts/common.sh@366 -- # decimal 2 00:09:30.037 11:52:55 event -- scripts/common.sh@353 -- # local d=2 00:09:30.037 11:52:55 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.037 11:52:55 event -- scripts/common.sh@355 -- # echo 2 00:09:30.037 11:52:55 event -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.037 11:52:55 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.037 11:52:55 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.037 11:52:55 event -- scripts/common.sh@368 -- # return 0 00:09:30.037 11:52:55 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.037 11:52:55 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.037 --rc genhtml_branch_coverage=1 00:09:30.037 --rc genhtml_function_coverage=1 00:09:30.037 --rc genhtml_legend=1 00:09:30.037 --rc geninfo_all_blocks=1 00:09:30.037 --rc geninfo_unexecuted_blocks=1 00:09:30.037 00:09:30.037 ' 00:09:30.037 11:52:55 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.037 --rc genhtml_branch_coverage=1 00:09:30.037 --rc genhtml_function_coverage=1 00:09:30.037 --rc genhtml_legend=1 00:09:30.037 --rc geninfo_all_blocks=1 00:09:30.037 --rc geninfo_unexecuted_blocks=1 00:09:30.037 00:09:30.037 ' 00:09:30.037 11:52:55 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.037 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:30.037 --rc genhtml_branch_coverage=1 00:09:30.037 --rc genhtml_function_coverage=1 00:09:30.037 --rc genhtml_legend=1 00:09:30.037 --rc geninfo_all_blocks=1 00:09:30.037 --rc geninfo_unexecuted_blocks=1 00:09:30.037 00:09:30.037 ' 00:09:30.037 11:52:55 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.037 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.037 --rc genhtml_branch_coverage=1 00:09:30.037 --rc genhtml_function_coverage=1 00:09:30.037 --rc genhtml_legend=1 00:09:30.037 --rc geninfo_all_blocks=1 00:09:30.037 --rc geninfo_unexecuted_blocks=1 00:09:30.037 00:09:30.037 ' 00:09:30.037 11:52:55 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:30.037 11:52:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:09:30.037 11:52:55 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:30.037 11:52:55 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:30.037 11:52:55 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.037 11:52:55 event -- common/autotest_common.sh@10 -- # set +x 00:09:30.296 ************************************ 00:09:30.296 START TEST event_perf 00:09:30.296 ************************************ 00:09:30.296 11:52:55 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:09:30.296 Running I/O for 1 seconds...[2024-12-05 11:52:55.129068] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:09:30.296 [2024-12-05 11:52:55.129171] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127914 ] 00:09:30.296 [2024-12-05 11:52:55.228024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.297 [2024-12-05 11:52:55.267083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.297 [2024-12-05 11:52:55.267240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.297 [2024-12-05 11:52:55.267528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:30.297 [2024-12-05 11:52:55.267638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:31.678 Running I/O for 1 seconds... 00:09:31.678 lcore 0: 175318 00:09:31.678 lcore 1: 175321 00:09:31.678 lcore 2: 175320 00:09:31.678 lcore 3: 175320 00:09:31.678 done. 
00:09:31.678 00:09:31.678 real 0m1.187s 00:09:31.678 user 0m4.087s 00:09:31.678 sys 0m0.096s 00:09:31.678 11:52:56 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.678 11:52:56 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:09:31.678 ************************************ 00:09:31.678 END TEST event_perf 00:09:31.678 ************************************ 00:09:31.678 11:52:56 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:31.678 11:52:56 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:31.678 11:52:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.678 11:52:56 event -- common/autotest_common.sh@10 -- # set +x 00:09:31.678 ************************************ 00:09:31.678 START TEST event_reactor 00:09:31.678 ************************************ 00:09:31.678 11:52:56 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:09:31.678 [2024-12-05 11:52:56.391463] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:09:31.678 [2024-12-05 11:52:56.391560] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128267 ] 00:09:31.678 [2024-12-05 11:52:56.479136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.678 [2024-12-05 11:52:56.513332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.619 test_start 00:09:32.619 oneshot 00:09:32.619 tick 100 00:09:32.619 tick 100 00:09:32.619 tick 250 00:09:32.619 tick 100 00:09:32.619 tick 100 00:09:32.619 tick 100 00:09:32.619 tick 250 00:09:32.619 tick 500 00:09:32.619 tick 100 00:09:32.619 tick 100 00:09:32.619 tick 250 00:09:32.619 tick 100 00:09:32.619 tick 100 00:09:32.619 test_end 00:09:32.619 00:09:32.619 real 0m1.169s 00:09:32.619 user 0m1.088s 00:09:32.619 sys 0m0.078s 00:09:32.619 11:52:57 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:32.619 11:52:57 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:09:32.619 ************************************ 00:09:32.619 END TEST event_reactor 00:09:32.619 ************************************ 00:09:32.619 11:52:57 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:09:32.619 11:52:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:32.619 11:52:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:32.619 11:52:57 event -- common/autotest_common.sh@10 -- # set +x 00:09:32.619 ************************************ 00:09:32.619 START TEST event_reactor_perf 00:09:32.619 ************************************ 00:09:32.619 11:52:57 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:09:32.619 [2024-12-05 11:52:57.635930] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:09:32.619 [2024-12-05 11:52:57.636035] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128441 ] 00:09:32.881 [2024-12-05 11:52:57.722590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.881 [2024-12-05 11:52:57.759108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.822 test_start 00:09:33.822 test_end 00:09:33.822 Performance: 539813 events per second 00:09:33.822 00:09:33.822 real 0m1.169s 00:09:33.822 user 0m1.089s 00:09:33.822 sys 0m0.076s 00:09:33.822 11:52:58 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:33.822 11:52:58 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:09:33.822 ************************************ 00:09:33.822 END TEST event_reactor_perf 00:09:33.822 ************************************ 00:09:33.822 11:52:58 event -- event/event.sh@49 -- # uname -s 00:09:33.822 11:52:58 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:09:33.822 11:52:58 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:33.822 11:52:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:33.822 11:52:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:33.822 11:52:58 event -- common/autotest_common.sh@10 -- # set +x 00:09:33.822 ************************************ 00:09:33.822 START TEST event_scheduler 00:09:33.822 ************************************ 00:09:33.822 11:52:58 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:09:34.083 * Looking for test storage... 00:09:34.083 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:09:34.083 11:52:58 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:34.083 11:52:58 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:09:34.083 11:52:58 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:34.083 11:52:59 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:09:34.083 11:52:59 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:09:34.084 11:52:59 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:34.084 11:52:59 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:34.084 11:52:59 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:09:34.084 11:52:59 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:34.084 11:52:59 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:34.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.084 --rc genhtml_branch_coverage=1 00:09:34.084 --rc genhtml_function_coverage=1 00:09:34.084 --rc genhtml_legend=1 00:09:34.084 --rc geninfo_all_blocks=1 00:09:34.084 --rc geninfo_unexecuted_blocks=1 00:09:34.084 00:09:34.084 ' 00:09:34.084 11:52:59 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:34.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.084 --rc genhtml_branch_coverage=1 00:09:34.084 --rc genhtml_function_coverage=1 00:09:34.084 --rc 
genhtml_legend=1 00:09:34.084 --rc geninfo_all_blocks=1 00:09:34.084 --rc geninfo_unexecuted_blocks=1 00:09:34.084 00:09:34.084 ' 00:09:34.084 11:52:59 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:34.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.084 --rc genhtml_branch_coverage=1 00:09:34.084 --rc genhtml_function_coverage=1 00:09:34.084 --rc genhtml_legend=1 00:09:34.084 --rc geninfo_all_blocks=1 00:09:34.084 --rc geninfo_unexecuted_blocks=1 00:09:34.084 00:09:34.084 ' 00:09:34.084 11:52:59 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:34.084 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:34.084 --rc genhtml_branch_coverage=1 00:09:34.084 --rc genhtml_function_coverage=1 00:09:34.084 --rc genhtml_legend=1 00:09:34.084 --rc geninfo_all_blocks=1 00:09:34.084 --rc geninfo_unexecuted_blocks=1 00:09:34.084 00:09:34.084 ' 00:09:34.084 11:52:59 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:09:34.084 11:52:59 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1128711 00:09:34.084 11:52:59 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:09:34.084 11:52:59 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1128711 00:09:34.084 11:52:59 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:09:34.084 11:52:59 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1128711 ']' 00:09:34.084 11:52:59 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:34.084 11:52:59 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:34.084 11:52:59 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:34.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:34.084 11:52:59 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:34.084 11:52:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:34.084 [2024-12-05 11:52:59.118083] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:09:34.084 [2024-12-05 11:52:59.118159] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128711 ] 00:09:34.345 [2024-12-05 11:52:59.209615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:34.345 [2024-12-05 11:52:59.263871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.345 [2024-12-05 11:52:59.264033] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:34.345 [2024-12-05 11:52:59.264193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:34.345 [2024-12-05 11:52:59.264193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:34.917 11:52:59 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:34.917 11:52:59 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:09:34.917 11:52:59 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:09:34.917 11:52:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.917 11:52:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:34.917 [2024-12-05 11:52:59.938495] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:09:34.917 [2024-12-05 11:52:59.938513] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:09:34.917 [2024-12-05 11:52:59.938523] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:09:34.917 [2024-12-05 11:52:59.938529] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:09:34.917 [2024-12-05 11:52:59.938535] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:09:34.917 11:52:59 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:34.917 11:52:59 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:09:34.917 11:52:59 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:34.917 11:52:59 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:35.178 [2024-12-05 11:53:00.007854] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:09:35.178 11:53:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.178 11:53:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:09:35.178 11:53:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:35.178 11:53:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:35.178 11:53:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:35.178 ************************************ 00:09:35.178 START TEST scheduler_create_thread 00:09:35.178 ************************************ 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.178 2 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.178 3 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.178 4 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.178 5 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.178 11:53:00 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.178 6 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.178 7 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.178 11:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:09:35.179 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.179 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.179 8 00:09:35.179 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.179 11:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:09:35.179 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.179 11:53:00 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.179 9 00:09:35.179 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.179 11:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:09:35.179 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.179 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:35.813 10 00:09:35.813 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.813 11:53:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:09:35.813 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.813 11:53:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:37.251 11:53:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.251 11:53:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:09:37.251 11:53:01 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:09:37.251 11:53:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.251 11:53:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:37.821 11:53:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:37.821 11:53:02 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:09:37.821 11:53:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:37.821 11:53:02 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:38.760 11:53:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.760 11:53:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:09:38.760 11:53:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:09:38.760 11:53:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.760 11:53:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.330 11:53:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.330 00:09:39.330 real 0m4.224s 00:09:39.330 user 0m0.023s 00:09:39.330 sys 0m0.009s 00:09:39.331 11:53:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.331 11:53:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:09:39.331 ************************************ 00:09:39.331 END TEST scheduler_create_thread 00:09:39.331 ************************************ 00:09:39.331 11:53:04 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:09:39.331 11:53:04 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1128711 00:09:39.331 11:53:04 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1128711 ']' 00:09:39.331 11:53:04 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1128711 00:09:39.331 11:53:04 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:09:39.331 11:53:04 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:39.331 11:53:04 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1128711 00:09:39.331 11:53:04 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:09:39.331 11:53:04 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:09:39.331 11:53:04 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1128711' 00:09:39.331 killing process with pid 1128711 00:09:39.331 11:53:04 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1128711 00:09:39.331 11:53:04 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1128711 00:09:39.592 [2024-12-05 11:53:04.553282] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
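The scheduler_create_thread trace above issues one active and one idle pinned-thread RPC per core in the 0xF mask (scheduler.sh lines @12-@19), before the unpinned and deleted-thread cases. A self-contained sketch of just that pinned loop, with rpc_cmd stubbed to record calls instead of talking to the SPDK app over /var/tmp/spdk.sock (the stub and the loop shape are assumptions; the real harness drives a live scheduler app):

```shell
#!/bin/sh
# Sketch: reproduce the pinned-thread creation loop implied by the trace above.
# rpc_cmd is a recording stub here; the real one is SPDK's JSON-RPC wrapper.
calls=""
rpc_cmd() { calls="${calls}$*\n"; }

# One active (-a 100) and one idle (-a 0) thread pinned to each core mask.
for mask in 0x1 0x2 0x4 0x8; do
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m "$mask" -a 100
  rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m "$mask" -a 0
done

n=$(printf "%b" "$calls" | wc -l | tr -d ' ')
echo "$n pinned threads requested"   # 8 total: 4 active + 4 idle
```

The real run then lets the dynamic scheduler rebalance these threads across the reactors started on cores 0-3 before tearing them down.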
00:09:39.852 00:09:39.852 real 0m5.847s 00:09:39.852 user 0m12.910s 00:09:39.852 sys 0m0.430s 00:09:39.852 11:53:04 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.852 11:53:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:09:39.852 ************************************ 00:09:39.852 END TEST event_scheduler 00:09:39.852 ************************************ 00:09:39.852 11:53:04 event -- event/event.sh@51 -- # modprobe -n nbd 00:09:39.852 11:53:04 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:09:39.852 11:53:04 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:39.852 11:53:04 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.852 11:53:04 event -- common/autotest_common.sh@10 -- # set +x 00:09:39.852 ************************************ 00:09:39.852 START TEST app_repeat 00:09:39.852 ************************************ 00:09:39.852 11:53:04 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:09:39.852 11:53:04 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:39.852 11:53:04 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:39.852 11:53:04 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:09:39.852 11:53:04 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:39.852 11:53:04 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:09:39.852 11:53:04 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:09:39.852 11:53:04 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:09:39.852 11:53:04 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1130078 00:09:39.852 11:53:04 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:09:39.852 11:53:04 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:09:39.852 11:53:04 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1130078' 00:09:39.852 Process app_repeat pid: 1130078 00:09:39.852 11:53:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:39.852 11:53:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:09:39.852 spdk_app_start Round 0 00:09:39.852 11:53:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1130078 /var/tmp/spdk-nbd.sock 00:09:39.852 11:53:04 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1130078 ']' 00:09:39.852 11:53:04 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:39.852 11:53:04 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:39.852 11:53:04 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:39.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:09:39.852 11:53:04 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:39.852 11:53:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:39.852 [2024-12-05 11:53:04.836232] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:09:39.852 [2024-12-05 11:53:04.836290] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1130078 ] 00:09:40.123 [2024-12-05 11:53:04.921738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:40.123 [2024-12-05 11:53:04.953738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.123 [2024-12-05 11:53:04.953738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:40.123 11:53:05 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:40.123 11:53:05 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:40.123 11:53:05 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:40.383 Malloc0 00:09:40.383 11:53:05 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:40.383 Malloc1 00:09:40.383 11:53:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:40.383 11:53:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.383 11:53:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:40.383 11:53:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:40.383 11:53:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.383 11:53:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:40.383 11:53:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:40.383 
11:53:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.383 11:53:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:40.383 11:53:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:40.383 11:53:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:40.383 11:53:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:40.383 11:53:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:40.383 11:53:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:40.383 11:53:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:40.383 11:53:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:40.642 /dev/nbd0 00:09:40.642 11:53:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:40.642 11:53:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:40.642 11:53:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:40.642 11:53:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:40.642 11:53:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:40.642 11:53:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:40.642 11:53:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:40.642 11:53:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:40.642 11:53:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:40.642 11:53:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:40.642 11:53:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:09:40.642 1+0 records in 00:09:40.642 1+0 records out 00:09:40.642 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027483 s, 14.9 MB/s 00:09:40.642 11:53:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:40.642 11:53:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:40.642 11:53:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:40.642 11:53:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:40.642 11:53:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:40.642 11:53:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:40.642 11:53:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:40.642 11:53:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:40.901 /dev/nbd1 00:09:40.901 11:53:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:40.901 11:53:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:40.901 11:53:05 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:40.901 11:53:05 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:40.901 11:53:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:40.902 11:53:05 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:40.902 11:53:05 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:40.902 11:53:05 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:40.902 11:53:05 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:40.902 11:53:05 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:40.902 11:53:05 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:40.902 1+0 records in 00:09:40.902 1+0 records out 00:09:40.902 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000169131 s, 24.2 MB/s 00:09:40.902 11:53:05 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:40.902 11:53:05 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:40.902 11:53:05 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:40.902 11:53:05 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:40.902 11:53:05 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:40.902 11:53:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:40.902 11:53:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:40.902 11:53:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:40.902 11:53:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:40.902 11:53:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:41.162 { 00:09:41.162 "nbd_device": "/dev/nbd0", 00:09:41.162 "bdev_name": "Malloc0" 00:09:41.162 }, 00:09:41.162 { 00:09:41.162 "nbd_device": "/dev/nbd1", 00:09:41.162 "bdev_name": "Malloc1" 00:09:41.162 } 00:09:41.162 ]' 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:41.162 { 00:09:41.162 "nbd_device": "/dev/nbd0", 00:09:41.162 "bdev_name": "Malloc0" 00:09:41.162 
}, 00:09:41.162 { 00:09:41.162 "nbd_device": "/dev/nbd1", 00:09:41.162 "bdev_name": "Malloc1" 00:09:41.162 } 00:09:41.162 ]' 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:41.162 /dev/nbd1' 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:41.162 /dev/nbd1' 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:41.162 256+0 records in 00:09:41.162 256+0 records out 00:09:41.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011636 s, 90.1 MB/s 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:41.162 256+0 records in 00:09:41.162 256+0 records out 00:09:41.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122027 s, 85.9 MB/s 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:41.162 256+0 records in 00:09:41.162 256+0 records out 00:09:41.162 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0130545 s, 80.3 MB/s 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:41.162 11:53:06 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.162 11:53:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:41.422 11:53:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:41.422 11:53:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:41.422 11:53:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:41.422 11:53:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.422 11:53:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.422 11:53:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:41.422 11:53:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:41.422 11:53:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.422 11:53:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:41.422 11:53:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:41.681 11:53:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:41.681 11:53:06 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:41.681 11:53:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:41.681 11:53:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:41.681 11:53:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:41.681 11:53:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:41.681 11:53:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:41.681 11:53:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:41.681 11:53:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:41.681 11:53:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:41.681 11:53:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:41.940 11:53:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:41.940 11:53:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:41.940 11:53:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:41.940 11:53:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:41.940 11:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:41.940 11:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:41.940 11:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:41.940 11:53:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:41.940 11:53:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:41.940 11:53:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:41.940 11:53:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:41.940 11:53:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:41.940 11:53:06 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:41.940 11:53:06 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:42.199 [2024-12-05 11:53:07.062394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:42.199 [2024-12-05 11:53:07.091955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.199 [2024-12-05 11:53:07.091955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:42.199 [2024-12-05 11:53:07.121101] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:42.199 [2024-12-05 11:53:07.121133] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:45.492 11:53:09 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:45.492 11:53:09 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:09:45.492 spdk_app_start Round 1 00:09:45.492 11:53:09 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1130078 /var/tmp/spdk-nbd.sock 00:09:45.492 11:53:09 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1130078 ']' 00:09:45.492 11:53:09 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:45.492 11:53:09 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.492 11:53:09 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:45.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:45.492 11:53:09 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.492 11:53:09 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:45.492 11:53:10 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.492 11:53:10 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:45.492 11:53:10 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:45.492 Malloc0 00:09:45.492 11:53:10 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:45.492 Malloc1 00:09:45.754 11:53:10 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:45.754 /dev/nbd0 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:45.754 11:53:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:45.754 11:53:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:45.754 11:53:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:45.754 11:53:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:45.754 11:53:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:45.754 11:53:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:45.754 11:53:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:45.754 11:53:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:45.754 11:53:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:45.754 1+0 records in 00:09:45.754 1+0 records out 00:09:45.754 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000270148 s, 15.2 MB/s 00:09:45.754 11:53:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:45.754 11:53:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:45.754 11:53:10 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:45.754 11:53:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:45.754 11:53:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:45.754 11:53:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:46.016 /dev/nbd1 00:09:46.016 11:53:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:46.016 11:53:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:46.016 11:53:10 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:46.016 11:53:10 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:46.016 11:53:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:46.016 11:53:10 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:46.016 11:53:10 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:46.016 11:53:10 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:46.016 11:53:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:46.016 11:53:10 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:46.016 11:53:10 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:46.016 1+0 records in 00:09:46.016 1+0 records out 00:09:46.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279959 s, 14.6 MB/s 00:09:46.016 11:53:10 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:46.016 11:53:10 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:46.016 11:53:10 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:46.016 11:53:10 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:46.016 11:53:10 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:46.016 11:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:46.016 11:53:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:46.016 11:53:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:46.016 11:53:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.016 11:53:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:46.278 { 00:09:46.278 "nbd_device": "/dev/nbd0", 00:09:46.278 "bdev_name": "Malloc0" 00:09:46.278 }, 00:09:46.278 { 00:09:46.278 "nbd_device": "/dev/nbd1", 00:09:46.278 "bdev_name": "Malloc1" 00:09:46.278 } 00:09:46.278 ]' 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:46.278 { 00:09:46.278 "nbd_device": "/dev/nbd0", 00:09:46.278 "bdev_name": "Malloc0" 00:09:46.278 }, 00:09:46.278 { 00:09:46.278 "nbd_device": "/dev/nbd1", 00:09:46.278 "bdev_name": "Malloc1" 00:09:46.278 } 00:09:46.278 ]' 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:46.278 /dev/nbd1' 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:46.278 /dev/nbd1' 00:09:46.278 
11:53:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:46.278 256+0 records in 00:09:46.278 256+0 records out 00:09:46.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0123873 s, 84.6 MB/s 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:46.278 256+0 records in 00:09:46.278 256+0 records out 00:09:46.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.012445 s, 84.3 MB/s 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:46.278 256+0 records in 00:09:46.278 256+0 records out 00:09:46.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0131703 s, 79.6 MB/s 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.278 11:53:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:46.539 11:53:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:46.539 11:53:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:46.539 11:53:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:46.539 11:53:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.539 11:53:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.539 11:53:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:46.539 11:53:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:46.539 11:53:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.539 11:53:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:46.539 11:53:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:46.799 11:53:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:46.799 11:53:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:46.799 11:53:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:46.799 11:53:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:46.799 11:53:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:46.799 11:53:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:46.799 11:53:11 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:09:46.799 11:53:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:46.799 11:53:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:46.799 11:53:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:46.799 11:53:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:47.061 11:53:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:47.061 11:53:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:47.061 11:53:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:47.061 11:53:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:47.061 11:53:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:47.061 11:53:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:47.061 11:53:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:47.061 11:53:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:47.061 11:53:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:47.061 11:53:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:47.061 11:53:11 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:47.061 11:53:11 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:47.061 11:53:11 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:47.061 11:53:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:47.322 [2024-12-05 11:53:12.175563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:47.322 [2024-12-05 11:53:12.205291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.322 [2024-12-05 11:53:12.205291] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.322 [2024-12-05 11:53:12.234811] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:47.322 [2024-12-05 11:53:12.234841] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:50.622 11:53:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:09:50.622 11:53:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:09:50.622 spdk_app_start Round 2 00:09:50.622 11:53:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1130078 /var/tmp/spdk-nbd.sock 00:09:50.622 11:53:15 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1130078 ']' 00:09:50.622 11:53:15 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:50.622 11:53:15 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:50.622 11:53:15 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:50.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:50.622 11:53:15 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:50.622 11:53:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:50.622 11:53:15 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:50.622 11:53:15 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:50.622 11:53:15 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:50.622 Malloc0 00:09:50.622 11:53:15 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:09:50.622 Malloc1 00:09:50.622 11:53:15 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:50.622 11:53:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:09:50.882 /dev/nbd0 00:09:50.882 11:53:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:50.882 11:53:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:50.882 11:53:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:09:50.882 11:53:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:50.882 11:53:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:50.882 11:53:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:50.882 11:53:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:09:50.882 11:53:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:50.882 11:53:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:50.882 11:53:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:50.882 11:53:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:50.882 1+0 records in 00:09:50.882 1+0 records out 00:09:50.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267557 s, 15.3 MB/s 00:09:50.882 11:53:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:50.882 11:53:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:50.882 11:53:15 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:50.882 11:53:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:50.882 11:53:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:50.882 11:53:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:50.882 11:53:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:50.882 11:53:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:09:51.143 /dev/nbd1 00:09:51.143 11:53:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:09:51.143 11:53:16 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:09:51.143 11:53:16 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:09:51.143 11:53:16 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:09:51.143 11:53:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:09:51.143 11:53:16 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:09:51.143 11:53:16 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:09:51.143 11:53:16 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:09:51.143 11:53:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:09:51.143 11:53:16 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:09:51.143 11:53:16 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:09:51.143 1+0 records in 00:09:51.143 1+0 records out 00:09:51.143 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273766 s, 15.0 MB/s 00:09:51.143 11:53:16 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:51.143 11:53:16 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:09:51.143 11:53:16 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:09:51.143 11:53:16 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:09:51.143 11:53:16 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:09:51.143 11:53:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:51.143 11:53:16 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:09:51.143 11:53:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:51.143 11:53:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.143 11:53:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:51.405 { 00:09:51.405 "nbd_device": "/dev/nbd0", 00:09:51.405 "bdev_name": "Malloc0" 00:09:51.405 }, 00:09:51.405 { 00:09:51.405 "nbd_device": "/dev/nbd1", 00:09:51.405 "bdev_name": "Malloc1" 00:09:51.405 } 00:09:51.405 ]' 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:51.405 { 00:09:51.405 "nbd_device": "/dev/nbd0", 00:09:51.405 "bdev_name": "Malloc0" 00:09:51.405 }, 00:09:51.405 { 00:09:51.405 "nbd_device": "/dev/nbd1", 00:09:51.405 "bdev_name": "Malloc1" 00:09:51.405 } 00:09:51.405 ]' 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:09:51.405 /dev/nbd1' 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:09:51.405 /dev/nbd1' 00:09:51.405 
11:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:09:51.405 256+0 records in 00:09:51.405 256+0 records out 00:09:51.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00575208 s, 182 MB/s 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:09:51.405 256+0 records in 00:09:51.405 256+0 records out 00:09:51.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01155 s, 90.8 MB/s 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:09:51.405 256+0 records in 00:09:51.405 256+0 records out 00:09:51.405 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124088 s, 84.5 MB/s 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.405 11:53:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:09:51.667 11:53:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:51.668 11:53:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:51.668 11:53:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:51.668 11:53:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.668 11:53:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.668 11:53:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:51.668 11:53:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:09:51.668 11:53:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.668 11:53:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:51.668 11:53:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:09:51.928 11:53:16 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:09:51.928 11:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:52.188 11:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:09:52.188 11:53:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:09:52.188 11:53:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:09:52.188 11:53:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:09:52.188 11:53:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:09:52.188 11:53:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:09:52.188 11:53:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:09:52.188 11:53:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:09:52.448 [2024-12-05 11:53:17.252704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:52.448 [2024-12-05 11:53:17.282121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.448 [2024-12-05 11:53:17.282121] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.448 [2024-12-05 11:53:17.311261] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:09:52.448 [2024-12-05 11:53:17.311290] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:09:55.748 11:53:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1130078 /var/tmp/spdk-nbd.sock 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1130078 ']' 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:09:55.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:09:55.748 11:53:20 event.app_repeat -- event/event.sh@39 -- # killprocess 1130078 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1130078 ']' 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1130078 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1130078 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1130078' 00:09:55.748 killing process with pid 1130078 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1130078 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1130078 00:09:55.748 spdk_app_start is called in Round 0. 00:09:55.748 Shutdown signal received, stop current app iteration 00:09:55.748 Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 reinitialization... 00:09:55.748 spdk_app_start is called in Round 1. 00:09:55.748 Shutdown signal received, stop current app iteration 00:09:55.748 Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 reinitialization... 00:09:55.748 spdk_app_start is called in Round 2. 
00:09:55.748 Shutdown signal received, stop current app iteration 00:09:55.748 Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 reinitialization... 00:09:55.748 spdk_app_start is called in Round 3. 00:09:55.748 Shutdown signal received, stop current app iteration 00:09:55.748 11:53:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:09:55.748 11:53:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:09:55.748 00:09:55.748 real 0m15.714s 00:09:55.748 user 0m34.751s 00:09:55.748 sys 0m2.247s 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.748 11:53:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:09:55.748 ************************************ 00:09:55.748 END TEST app_repeat 00:09:55.748 ************************************ 00:09:55.748 11:53:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:09:55.748 11:53:20 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:55.748 11:53:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.748 11:53:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.748 11:53:20 event -- common/autotest_common.sh@10 -- # set +x 00:09:55.748 ************************************ 00:09:55.748 START TEST cpu_locks 00:09:55.748 ************************************ 00:09:55.748 11:53:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:09:55.748 * Looking for test storage... 
00:09:55.748 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:09:55.748 11:53:20 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:55.748 11:53:20 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:09:55.748 11:53:20 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:55.748 11:53:20 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:55.748 11:53:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:55.748 11:53:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:55.748 11:53:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:55.748 11:53:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:09:55.748 11:53:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:09:55.748 11:53:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:09:55.748 11:53:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:09:55.748 11:53:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:09:55.748 11:53:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:09:55.748 11:53:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:09:55.748 11:53:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:55.749 11:53:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:09:55.749 11:53:20 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:55.749 11:53:20 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:55.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.749 --rc genhtml_branch_coverage=1 00:09:55.749 --rc genhtml_function_coverage=1 00:09:55.749 --rc genhtml_legend=1 00:09:55.749 --rc geninfo_all_blocks=1 00:09:55.749 --rc geninfo_unexecuted_blocks=1 00:09:55.749 00:09:55.749 ' 00:09:55.749 11:53:20 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:55.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.749 --rc genhtml_branch_coverage=1 00:09:55.749 --rc genhtml_function_coverage=1 00:09:55.749 --rc genhtml_legend=1 00:09:55.749 --rc geninfo_all_blocks=1 00:09:55.749 --rc geninfo_unexecuted_blocks=1 
00:09:55.749 00:09:55.749 ' 00:09:55.749 11:53:20 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:55.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.749 --rc genhtml_branch_coverage=1 00:09:55.749 --rc genhtml_function_coverage=1 00:09:55.749 --rc genhtml_legend=1 00:09:55.749 --rc geninfo_all_blocks=1 00:09:55.749 --rc geninfo_unexecuted_blocks=1 00:09:55.749 00:09:55.749 ' 00:09:55.749 11:53:20 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:55.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:55.749 --rc genhtml_branch_coverage=1 00:09:55.749 --rc genhtml_function_coverage=1 00:09:55.749 --rc genhtml_legend=1 00:09:55.749 --rc geninfo_all_blocks=1 00:09:55.749 --rc geninfo_unexecuted_blocks=1 00:09:55.749 00:09:55.749 ' 00:09:55.749 11:53:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:09:55.749 11:53:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:09:55.749 11:53:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:09:55.749 11:53:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:09:55.749 11:53:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:55.749 11:53:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:55.749 11:53:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:56.009 ************************************ 00:09:56.009 START TEST default_locks 00:09:56.009 ************************************ 00:09:56.009 11:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:09:56.009 11:53:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1133350 00:09:56.009 11:53:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1133350 00:09:56.009 11:53:20 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:56.009 11:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1133350 ']' 00:09:56.009 11:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.009 11:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.010 11:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.010 11:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.010 11:53:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:56.010 [2024-12-05 11:53:20.898438] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:09:56.010 [2024-12-05 11:53:20.898510] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133350 ] 00:09:56.010 [2024-12-05 11:53:20.985944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.010 [2024-12-05 11:53:21.020881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1133350 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1133350 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:56.948 lslocks: write error 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1133350 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1133350 ']' 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1133350 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1133350 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1133350' 00:09:56.948 killing process with pid 1133350 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1133350 00:09:56.948 11:53:21 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1133350 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1133350 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1133350 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1133350 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1133350 ']' 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:57.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1133350) - No such process 00:09:57.209 ERROR: process (pid: 1133350) is no longer running 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:57.209 00:09:57.209 real 0m1.246s 00:09:57.209 user 0m1.343s 00:09:57.209 sys 0m0.414s 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.209 11:53:22 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:09:57.209 ************************************ 00:09:57.209 END TEST default_locks 00:09:57.209 ************************************ 00:09:57.209 11:53:22 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:09:57.209 11:53:22 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.209 11:53:22 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.209 11:53:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:57.209 ************************************ 00:09:57.209 START TEST default_locks_via_rpc 00:09:57.209 ************************************ 00:09:57.209 11:53:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:09:57.209 11:53:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1133709 00:09:57.209 11:53:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1133709 00:09:57.209 11:53:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:57.209 11:53:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1133709 ']' 00:09:57.209 11:53:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:57.209 11:53:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:57.209 11:53:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:57.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:57.209 11:53:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:57.209 11:53:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.209 [2024-12-05 11:53:22.217629] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:09:57.209 [2024-12-05 11:53:22.217683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1133709 ] 00:09:57.470 [2024-12-05 11:53:22.301850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.470 [2024-12-05 11:53:22.332693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:58.040 11:53:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:58.040 11:53:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:58.040 11:53:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:09:58.040 11:53:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.040 11:53:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.040 11:53:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.040 11:53:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:09:58.040 11:53:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:09:58.040 11:53:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:09:58.040 11:53:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:09:58.040 11:53:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:09:58.040 11:53:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.040 11:53:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.040 11:53:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.040 11:53:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1133709 00:09:58.040 11:53:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1133709 00:09:58.040 11:53:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:58.610 11:53:23 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1133709 00:09:58.610 11:53:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1133709 ']' 00:09:58.610 11:53:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1133709 00:09:58.610 11:53:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:09:58.610 11:53:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:58.610 11:53:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1133709 00:09:58.610 11:53:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:58.610 11:53:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:58.610 11:53:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1133709' 00:09:58.610 killing process with pid 1133709 00:09:58.610 11:53:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1133709 00:09:58.610 11:53:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1133709 00:09:58.872 00:09:58.872 real 0m1.646s 00:09:58.872 user 0m1.770s 00:09:58.872 sys 0m0.562s 00:09:58.872 11:53:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:58.872 11:53:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.872 ************************************ 00:09:58.872 END TEST default_locks_via_rpc 00:09:58.872 ************************************ 00:09:58.872 11:53:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:09:58.872 11:53:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:58.872 11:53:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:58.872 11:53:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:58.872 ************************************ 00:09:58.872 START TEST non_locking_app_on_locked_coremask 00:09:58.872 ************************************ 00:09:58.872 11:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:09:58.872 11:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1134075 00:09:58.872 11:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1134075 /var/tmp/spdk.sock 00:09:58.872 11:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:58.872 11:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1134075 ']' 00:09:58.872 11:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.872 11:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:58.872 11:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:09:58.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.872 11:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:58.872 11:53:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:59.132 [2024-12-05 11:53:23.939061] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:09:59.132 [2024-12-05 11:53:23.939113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1134075 ] 00:09:59.132 [2024-12-05 11:53:24.023226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.132 [2024-12-05 11:53:24.053540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.703 11:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:59.703 11:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:09:59.703 11:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1134260 00:09:59.703 11:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1134260 /var/tmp/spdk2.sock 00:09:59.703 11:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1134260 ']' 00:09:59.703 11:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:09:59.703 11:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:09:59.703 11:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:59.703 11:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:59.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:59.703 11:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:59.703 11:53:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:59.966 [2024-12-05 11:53:24.793321] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:09:59.966 [2024-12-05 11:53:24.793375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1134260 ] 00:09:59.966 [2024-12-05 11:53:24.881469] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:09:59.966 [2024-12-05 11:53:24.881500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:59.966 [2024-12-05 11:53:24.944059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:00.538 11:53:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:00.538 11:53:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:00.538 11:53:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1134075
00:10:00.538 11:53:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1134075
00:10:00.538 11:53:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:01.477 lslocks: write error
00:10:01.477 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1134075
00:10:01.477 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1134075 ']'
00:10:01.477 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1134075
00:10:01.477 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:01.477 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:01.477 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1134075
00:10:01.478 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:01.478 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:01.478 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1134075'
killing process with pid 1134075
00:10:01.478 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1134075
00:10:01.478 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1134075
00:10:01.738 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1134260
00:10:01.738 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1134260 ']'
00:10:01.738 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1134260
00:10:01.738 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:01.738 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:01.738 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1134260
00:10:01.738 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:01.738 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:01.738 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1134260'
killing process with pid 1134260
00:10:01.738 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1134260
00:10:01.738 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1134260
00:10:01.998
00:10:01.998 real 0m2.972s
00:10:01.998 user 0m3.322s
00:10:01.998 sys 0m0.922s
00:10:01.998 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:01.998 11:53:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:01.998 ************************************
00:10:01.998 END TEST non_locking_app_on_locked_coremask
00:10:01.998 ************************************
00:10:01.998 11:53:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:10:01.998 11:53:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:01.998 11:53:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:01.998 11:53:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:01.998 ************************************
00:10:01.998 START TEST locking_app_on_unlocked_coremask
00:10:01.998 ************************************
00:10:01.998 11:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:10:01.998 11:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1134786
00:10:01.998 11:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1134786 /var/tmp/spdk.sock
00:10:01.998 11:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:10:01.998 11:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1134786 ']'
00:10:01.998 11:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:01.998 11:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:01.998 11:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:01.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:01.998 11:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:01.998 11:53:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:02.258 [2024-12-05 11:53:26.997107] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:10:02.258 [2024-12-05 11:53:26.997168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1134786 ]
00:10:02.258 [2024-12-05 11:53:27.080815] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:10:02.258 [2024-12-05 11:53:27.080838] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:02.258 [2024-12-05 11:53:27.111677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:02.830 11:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:02.830 11:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:02.830 11:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:10:02.830 11:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1134798
00:10:02.830 11:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1134798 /var/tmp/spdk2.sock
00:10:02.830 11:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1134798 ']'
00:10:02.830 11:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:02.830 11:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:02.830 11:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:02.830 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:02.830 11:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:02.830 11:53:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:02.830 [2024-12-05 11:53:27.805939] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:10:02.830 [2024-12-05 11:53:27.805989] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1134798 ]
00:10:03.091 [2024-12-05 11:53:27.895244] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:03.091 [2024-12-05 11:53:27.953361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:03.664 11:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:03.664 11:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:03.664 11:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1134798
00:10:03.664 11:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:03.664 11:53:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1134798
00:10:04.234 lslocks: write error
00:10:04.234 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1134786
00:10:04.234 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1134786 ']'
00:10:04.235 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1134786
00:10:04.235 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:04.235 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:04.235 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1134786
00:10:04.235 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:04.235 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:04.235 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1134786'
killing process with pid 1134786
00:10:04.235 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1134786
00:10:04.235 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1134786
00:10:04.804 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1134798
00:10:04.804 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1134798 ']'
00:10:04.804 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1134798
00:10:04.804 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:04.804 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:04.804 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1134798
00:10:04.804 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:04.804 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:04.804 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1134798'
killing process with pid 1134798
00:10:04.804 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1134798
00:10:04.804 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1134798
00:10:05.066
00:10:05.066 real 0m2.957s
00:10:05.066 user 0m3.262s
00:10:05.066 sys 0m0.932s
00:10:05.066 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:05.066 11:53:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:05.066 ************************************
00:10:05.066 END TEST locking_app_on_unlocked_coremask
00:10:05.066 ************************************
00:10:05.066 11:53:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:10:05.066 11:53:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:05.066 11:53:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:05.066 11:53:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:05.066 ************************************
00:10:05.066 START TEST locking_app_on_locked_coremask
00:10:05.066 ************************************
00:10:05.066 11:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:10:05.066 11:53:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1135412
00:10:05.066 11:53:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1135412 /var/tmp/spdk.sock
00:10:05.066 11:53:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:10:05.066 11:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1135412 ']'
00:10:05.066 11:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:05.066 11:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:05.066 11:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:05.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:05.066 11:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:05.066 11:53:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:05.066 [2024-12-05 11:53:30.022290] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:10:05.066 [2024-12-05 11:53:30.022346] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135412 ]
00:10:05.066 [2024-12-05 11:53:30.107049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:05.327 [2024-12-05 11:53:30.147722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1135505
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1135505 /var/tmp/spdk2.sock
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1135505 /var/tmp/spdk2.sock
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1135505 /var/tmp/spdk2.sock
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1135505 ']'
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:05.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:05.899 11:53:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:05.899 [2024-12-05 11:53:30.882028] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:10:05.899 [2024-12-05 11:53:30.882083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135505 ]
00:10:06.180 [2024-12-05 11:53:30.968211] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1135412 has claimed it.
00:10:06.180 [2024-12-05 11:53:30.968247] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:10:06.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1135505) - No such process
00:10:06.751 ERROR: process (pid: 1135505) is no longer running
00:10:06.751 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:06.751 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:10:06.751 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:10:06.751 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:06.751 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:06.751 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:06.751 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1135412
00:10:06.751 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1135412
00:10:06.751 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:10:07.011 lslocks: write error
00:10:07.011 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1135412
00:10:07.011 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1135412 ']'
00:10:07.011 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1135412
00:10:07.011 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:10:07.011 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:07.011 11:53:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1135412
00:10:07.011 11:53:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:07.011 11:53:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:07.011 11:53:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1135412'
killing process with pid 1135412
00:10:07.011 11:53:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1135412
00:10:07.011 11:53:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1135412
00:10:07.271
00:10:07.271 real 0m2.271s
00:10:07.271 user 0m2.554s
00:10:07.271 sys 0m0.664s
00:10:07.271 11:53:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:07.271 11:53:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:07.271 ************************************
00:10:07.271 END TEST locking_app_on_locked_coremask
00:10:07.271 ************************************
00:10:07.271 11:53:32 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:10:07.271 11:53:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:07.271 11:53:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:07.271 11:53:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:07.271 ************************************
00:10:07.271 START TEST locking_overlapped_coremask
00:10:07.271 ************************************
00:10:07.271 11:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:10:07.271 11:53:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1135871
00:10:07.271 11:53:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1135871 /var/tmp/spdk.sock
00:10:07.271 11:53:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:10:07.271 11:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1135871 ']'
00:10:07.271 11:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:07.271 11:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:07.271 11:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:07.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:07.271 11:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:07.271 11:53:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:07.532 [2024-12-05 11:53:32.366292] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:10:07.532 [2024-12-05 11:53:32.366339] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1135871 ]
00:10:07.532 [2024-12-05 11:53:32.449541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:10:07.532 [2024-12-05 11:53:32.482013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:07.532 [2024-12-05 11:53:32.482163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:07.532 [2024-12-05 11:53:32.482165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1136040
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1136040 /var/tmp/spdk2.sock
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1136040 /var/tmp/spdk2.sock
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1136040 /var/tmp/spdk2.sock
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1136040 ']'
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:10:08.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:08.473 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:08.473 [2024-12-05 11:53:33.218280] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:10:08.473 [2024-12-05 11:53:33.218336] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136040 ]
00:10:08.794 [2024-12-05 11:53:33.330859] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1135871 has claimed it.
00:10:08.794 [2024-12-05 11:53:33.330902] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:10:08.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1136040) - No such process
00:10:08.794 ERROR: process (pid: 1136040) is no longer running
00:10:08.794 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:08.794 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:10:08.794 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:10:08.794 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:10:08.794 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:10:08.794 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:10:08.794 11:53:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:10:08.794 11:53:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:10:08.794 11:53:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:10:08.794 11:53:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:10:09.124 11:53:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1135871
00:10:09.124 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1135871 ']'
00:10:09.124 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1135871
00:10:09.124 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:10:09.124 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:09.124 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1135871
00:10:09.124 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:10:09.124 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:10:09.124 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1135871'
killing process with pid 1135871
00:10:09.124 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1135871
00:10:09.124 11:53:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1135871
00:10:09.124
00:10:09.124 real 0m1.778s
00:10:09.124 user 0m5.177s
00:10:09.124 sys 0m0.373s
00:10:09.124 11:53:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:09.124 11:53:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:10:09.124 ************************************
00:10:09.124 END TEST locking_overlapped_coremask
00:10:09.124 ************************************
00:10:09.124 11:53:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:10:09.124 11:53:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:10:09.124 11:53:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:09.124 11:53:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:10:09.124 ************************************
00:10:09.124 START TEST locking_overlapped_coremask_via_rpc
00:10:09.124 ************************************
00:10:09.124 11:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:10:09.124 11:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1136252
00:10:09.124 11:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1136252 /var/tmp/spdk.sock
00:10:09.124 11:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:10:09.124 11:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1136252 ']'
00:10:09.124 11:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:09.124 11:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:09.124 11:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:09.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:09.124 11:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:09.124 11:53:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.384 [2024-12-05 11:53:34.224243] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:10:09.384 [2024-12-05 11:53:34.224304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136252 ] 00:10:09.384 [2024-12-05 11:53:34.308612] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:10:09.384 [2024-12-05 11:53:34.308638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:09.384 [2024-12-05 11:53:34.343580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.384 [2024-12-05 11:53:34.343839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.384 [2024-12-05 11:53:34.343839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.323 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.323 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:10.323 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1136490 00:10:10.323 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1136490 /var/tmp/spdk2.sock 00:10:10.323 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1136490 ']' 00:10:10.323 11:53:35 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:10:10.323 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:10.323 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.323 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:10.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:10:10.323 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.323 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.323 [2024-12-05 11:53:35.063701] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:10:10.323 [2024-12-05 11:53:35.063754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1136490 ] 00:10:10.323 [2024-12-05 11:53:35.175444] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:10:10.323 [2024-12-05 11:53:35.175476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:10.323 [2024-12-05 11:53:35.249370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:10.323 [2024-12-05 11:53:35.249527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:10.323 [2024-12-05 11:53:35.249529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.892 11:53:35 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:10.892 [2024-12-05 11:53:35.846533] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1136252 has claimed it. 00:10:10.892 request: 00:10:10.892 { 00:10:10.892 "method": "framework_enable_cpumask_locks", 00:10:10.892 "req_id": 1 00:10:10.892 } 00:10:10.892 Got JSON-RPC error response 00:10:10.892 response: 00:10:10.892 { 00:10:10.892 "code": -32603, 00:10:10.892 "message": "Failed to claim CPU core: 2" 00:10:10.892 } 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1136252 /var/tmp/spdk.sock 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1136252 ']' 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.892 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:10.893 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.893 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:10.893 11:53:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.152 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.152 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:11.152 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1136490 /var/tmp/spdk2.sock 00:10:11.152 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1136490 ']' 00:10:11.152 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:10:11.152 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.152 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:10:11.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:10:11.152 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.152 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.411 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.411 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:10:11.411 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:10:11.411 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:10:11.411 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:10:11.411 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:10:11.411 00:10:11.411 real 0m2.049s 00:10:11.411 user 0m0.871s 00:10:11.411 sys 0m0.114s 00:10:11.411 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.411 11:53:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:11.411 ************************************ 00:10:11.411 END TEST locking_overlapped_coremask_via_rpc 00:10:11.411 ************************************ 00:10:11.411 11:53:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:10:11.411 11:53:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1136252 ]] 00:10:11.411 11:53:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1136252 00:10:11.411 11:53:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1136252 ']' 00:10:11.411 11:53:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1136252 00:10:11.411 11:53:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:11.411 11:53:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.411 11:53:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1136252 00:10:11.412 11:53:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:11.412 11:53:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:11.412 11:53:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1136252' 00:10:11.412 killing process with pid 1136252 00:10:11.412 11:53:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1136252 00:10:11.412 11:53:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1136252 00:10:11.672 11:53:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1136490 ]] 00:10:11.672 11:53:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1136490 00:10:11.672 11:53:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1136490 ']' 00:10:11.672 11:53:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1136490 00:10:11.672 11:53:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:10:11.672 11:53:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:11.672 11:53:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1136490 00:10:11.672 11:53:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:10:11.672 11:53:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:10:11.672 11:53:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1136490' 00:10:11.673 killing process with pid 1136490 00:10:11.673 11:53:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1136490 00:10:11.673 11:53:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1136490 00:10:11.933 11:53:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:11.933 11:53:36 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:10:11.933 11:53:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1136252 ]] 00:10:11.933 11:53:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1136252 00:10:11.933 11:53:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1136252 ']' 00:10:11.933 11:53:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1136252 00:10:11.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1136252) - No such process 00:10:11.933 11:53:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1136252 is not found' 00:10:11.933 Process with pid 1136252 is not found 00:10:11.933 11:53:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1136490 ]] 00:10:11.933 11:53:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1136490 00:10:11.933 11:53:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1136490 ']' 00:10:11.933 11:53:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1136490 00:10:11.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1136490) - No such process 00:10:11.933 11:53:36 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1136490 is not found' 00:10:11.933 Process with pid 1136490 is not found 00:10:11.933 11:53:36 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:10:11.933 00:10:11.933 real 0m16.183s 00:10:11.933 user 0m28.211s 00:10:11.933 sys 0m4.904s 00:10:11.933 11:53:36 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.933 
11:53:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:10:11.933 ************************************ 00:10:11.933 END TEST cpu_locks 00:10:11.933 ************************************ 00:10:11.933 00:10:11.933 real 0m41.950s 00:10:11.933 user 1m22.413s 00:10:11.933 sys 0m8.262s 00:10:11.933 11:53:36 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.933 11:53:36 event -- common/autotest_common.sh@10 -- # set +x 00:10:11.933 ************************************ 00:10:11.933 END TEST event 00:10:11.933 ************************************ 00:10:11.933 11:53:36 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:11.933 11:53:36 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:11.933 11:53:36 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.933 11:53:36 -- common/autotest_common.sh@10 -- # set +x 00:10:11.933 ************************************ 00:10:11.933 START TEST thread 00:10:11.933 ************************************ 00:10:11.933 11:53:36 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:10:12.193 * Looking for test storage... 
00:10:12.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:10:12.193 11:53:36 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:12.193 11:53:36 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:10:12.193 11:53:36 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:12.193 11:53:37 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:12.193 11:53:37 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:12.193 11:53:37 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:12.193 11:53:37 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:12.193 11:53:37 thread -- scripts/common.sh@336 -- # IFS=.-: 00:10:12.193 11:53:37 thread -- scripts/common.sh@336 -- # read -ra ver1 00:10:12.193 11:53:37 thread -- scripts/common.sh@337 -- # IFS=.-: 00:10:12.193 11:53:37 thread -- scripts/common.sh@337 -- # read -ra ver2 00:10:12.193 11:53:37 thread -- scripts/common.sh@338 -- # local 'op=<' 00:10:12.193 11:53:37 thread -- scripts/common.sh@340 -- # ver1_l=2 00:10:12.193 11:53:37 thread -- scripts/common.sh@341 -- # ver2_l=1 00:10:12.193 11:53:37 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:12.193 11:53:37 thread -- scripts/common.sh@344 -- # case "$op" in 00:10:12.193 11:53:37 thread -- scripts/common.sh@345 -- # : 1 00:10:12.193 11:53:37 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:12.193 11:53:37 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:12.193 11:53:37 thread -- scripts/common.sh@365 -- # decimal 1 00:10:12.193 11:53:37 thread -- scripts/common.sh@353 -- # local d=1 00:10:12.193 11:53:37 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:12.193 11:53:37 thread -- scripts/common.sh@355 -- # echo 1 00:10:12.193 11:53:37 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:10:12.193 11:53:37 thread -- scripts/common.sh@366 -- # decimal 2 00:10:12.193 11:53:37 thread -- scripts/common.sh@353 -- # local d=2 00:10:12.193 11:53:37 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:12.193 11:53:37 thread -- scripts/common.sh@355 -- # echo 2 00:10:12.193 11:53:37 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:10:12.193 11:53:37 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:12.193 11:53:37 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:12.193 11:53:37 thread -- scripts/common.sh@368 -- # return 0 00:10:12.193 11:53:37 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:12.193 11:53:37 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:12.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.193 --rc genhtml_branch_coverage=1 00:10:12.193 --rc genhtml_function_coverage=1 00:10:12.193 --rc genhtml_legend=1 00:10:12.193 --rc geninfo_all_blocks=1 00:10:12.193 --rc geninfo_unexecuted_blocks=1 00:10:12.193 00:10:12.193 ' 00:10:12.193 11:53:37 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:12.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.193 --rc genhtml_branch_coverage=1 00:10:12.193 --rc genhtml_function_coverage=1 00:10:12.193 --rc genhtml_legend=1 00:10:12.193 --rc geninfo_all_blocks=1 00:10:12.193 --rc geninfo_unexecuted_blocks=1 00:10:12.193 00:10:12.193 ' 00:10:12.193 11:53:37 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:12.193 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.193 --rc genhtml_branch_coverage=1 00:10:12.193 --rc genhtml_function_coverage=1 00:10:12.193 --rc genhtml_legend=1 00:10:12.193 --rc geninfo_all_blocks=1 00:10:12.193 --rc geninfo_unexecuted_blocks=1 00:10:12.193 00:10:12.193 ' 00:10:12.193 11:53:37 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:12.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:12.193 --rc genhtml_branch_coverage=1 00:10:12.193 --rc genhtml_function_coverage=1 00:10:12.193 --rc genhtml_legend=1 00:10:12.193 --rc geninfo_all_blocks=1 00:10:12.193 --rc geninfo_unexecuted_blocks=1 00:10:12.193 00:10:12.193 ' 00:10:12.193 11:53:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:12.193 11:53:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:12.193 11:53:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:12.193 11:53:37 thread -- common/autotest_common.sh@10 -- # set +x 00:10:12.193 ************************************ 00:10:12.193 START TEST thread_poller_perf 00:10:12.193 ************************************ 00:10:12.193 11:53:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:10:12.193 [2024-12-05 11:53:37.154373] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:10:12.193 [2024-12-05 11:53:37.154487] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137033 ] 00:10:12.193 [2024-12-05 11:53:37.212544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:12.193 [2024-12-05 11:53:37.242534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.193 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:10:13.574 [2024-12-05T10:53:38.623Z] ====================================== 00:10:13.574 [2024-12-05T10:53:38.623Z] busy:2407638220 (cyc) 00:10:13.574 [2024-12-05T10:53:38.623Z] total_run_count: 415000 00:10:13.574 [2024-12-05T10:53:38.623Z] tsc_hz: 2400000000 (cyc) 00:10:13.574 [2024-12-05T10:53:38.623Z] ====================================== 00:10:13.574 [2024-12-05T10:53:38.623Z] poller_cost: 5801 (cyc), 2417 (nsec) 00:10:13.574 00:10:13.574 real 0m1.142s 00:10:13.574 user 0m1.086s 00:10:13.574 sys 0m0.053s 00:10:13.574 11:53:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:13.574 11:53:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:13.574 ************************************ 00:10:13.574 END TEST thread_poller_perf 00:10:13.574 ************************************ 00:10:13.574 11:53:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:13.574 11:53:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:10:13.574 11:53:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:13.574 11:53:38 thread -- common/autotest_common.sh@10 -- # set +x 00:10:13.574 ************************************ 00:10:13.574 START TEST thread_poller_perf 00:10:13.574 
************************************ 00:10:13.574 11:53:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:10:13.574 [2024-12-05 11:53:38.373536] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:10:13.574 [2024-12-05 11:53:38.373634] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137279 ] 00:10:13.574 [2024-12-05 11:53:38.462347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.574 [2024-12-05 11:53:38.500139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:13.574 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:10:14.515 [2024-12-05T10:53:39.564Z] ====================================== 00:10:14.515 [2024-12-05T10:53:39.564Z] busy:2401354836 (cyc) 00:10:14.515 [2024-12-05T10:53:39.564Z] total_run_count: 5558000 00:10:14.515 [2024-12-05T10:53:39.564Z] tsc_hz: 2400000000 (cyc) 00:10:14.515 [2024-12-05T10:53:39.564Z] ====================================== 00:10:14.515 [2024-12-05T10:53:39.564Z] poller_cost: 432 (cyc), 180 (nsec) 00:10:14.515 00:10:14.515 real 0m1.176s 00:10:14.515 user 0m1.092s 00:10:14.515 sys 0m0.080s 00:10:14.515 11:53:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.515 11:53:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:10:14.515 ************************************ 00:10:14.515 END TEST thread_poller_perf 00:10:14.515 ************************************ 00:10:14.515 11:53:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:10:14.776 00:10:14.776 real 0m2.670s 00:10:14.776 user 0m2.360s 00:10:14.776 sys 0m0.326s 00:10:14.776 11:53:39 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.776 11:53:39 thread -- common/autotest_common.sh@10 -- # set +x 00:10:14.776 ************************************ 00:10:14.776 END TEST thread 00:10:14.776 ************************************ 00:10:14.776 11:53:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:10:14.776 11:53:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:14.776 11:53:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:14.776 11:53:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:14.776 11:53:39 -- common/autotest_common.sh@10 -- # set +x 00:10:14.776 ************************************ 00:10:14.776 START TEST app_cmdline 00:10:14.776 ************************************ 00:10:14.776 11:53:39 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:10:14.776 * Looking for test storage... 00:10:14.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:14.776 11:53:39 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:14.776 11:53:39 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:10:14.776 11:53:39 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:15.067 11:53:39 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.067 11:53:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:10:15.068 11:53:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:10:15.068 11:53:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.068 11:53:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:10:15.068 11:53:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.068 11:53:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.068 11:53:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.068 11:53:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:10:15.068 11:53:39 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.068 11:53:39 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.068 --rc genhtml_branch_coverage=1 
00:10:15.068 --rc genhtml_function_coverage=1 00:10:15.068 --rc genhtml_legend=1 00:10:15.068 --rc geninfo_all_blocks=1 00:10:15.068 --rc geninfo_unexecuted_blocks=1 00:10:15.068 00:10:15.068 ' 00:10:15.068 11:53:39 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.068 --rc genhtml_branch_coverage=1 00:10:15.068 --rc genhtml_function_coverage=1 00:10:15.068 --rc genhtml_legend=1 00:10:15.068 --rc geninfo_all_blocks=1 00:10:15.068 --rc geninfo_unexecuted_blocks=1 00:10:15.068 00:10:15.068 ' 00:10:15.068 11:53:39 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.068 --rc genhtml_branch_coverage=1 00:10:15.068 --rc genhtml_function_coverage=1 00:10:15.068 --rc genhtml_legend=1 00:10:15.068 --rc geninfo_all_blocks=1 00:10:15.068 --rc geninfo_unexecuted_blocks=1 00:10:15.068 00:10:15.068 ' 00:10:15.068 11:53:39 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.068 --rc genhtml_branch_coverage=1 00:10:15.068 --rc genhtml_function_coverage=1 00:10:15.068 --rc genhtml_legend=1 00:10:15.068 --rc geninfo_all_blocks=1 00:10:15.068 --rc geninfo_unexecuted_blocks=1 00:10:15.068 00:10:15.068 ' 00:10:15.068 11:53:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:15.068 11:53:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1137561 00:10:15.068 11:53:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1137561 00:10:15.068 11:53:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1137561 ']' 00:10:15.068 11:53:39 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:15.068 11:53:39 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:10:15.068 11:53:39 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.068 11:53:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.068 11:53:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.068 11:53:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:15.068 [2024-12-05 11:53:39.910202] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:10:15.068 [2024-12-05 11:53:39.910275] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1137561 ] 00:10:15.068 [2024-12-05 11:53:40.001572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.068 [2024-12-05 11:53:40.039588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:10:16.011 11:53:40 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:16.011 { 00:10:16.011 "version": "SPDK v25.01-pre git sha1 688351e0e", 00:10:16.011 "fields": { 00:10:16.011 "major": 25, 00:10:16.011 "minor": 1, 00:10:16.011 "patch": 0, 00:10:16.011 "suffix": "-pre", 00:10:16.011 "commit": "688351e0e" 00:10:16.011 } 00:10:16.011 } 00:10:16.011 11:53:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:10:16.011 11:53:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:16.011 11:53:40 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:10:16.011 11:53:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:16.011 11:53:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:16.011 11:53:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:16.011 11:53:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:16.011 11:53:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:16.011 11:53:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:16.011 11:53:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:16.011 11:53:40 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:16.272 request: 00:10:16.272 { 00:10:16.272 "method": "env_dpdk_get_mem_stats", 00:10:16.272 "req_id": 1 00:10:16.272 } 00:10:16.272 Got JSON-RPC error response 00:10:16.272 response: 00:10:16.272 { 00:10:16.272 "code": -32601, 00:10:16.272 "message": "Method not found" 00:10:16.272 } 00:10:16.272 11:53:41 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:10:16.272 11:53:41 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:16.272 11:53:41 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:16.272 11:53:41 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:16.272 11:53:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1137561 00:10:16.272 11:53:41 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1137561 ']' 00:10:16.272 11:53:41 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1137561 00:10:16.272 11:53:41 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:10:16.272 11:53:41 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:16.272 11:53:41 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1137561 00:10:16.272 11:53:41 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:16.272 11:53:41 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:16.272 11:53:41 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1137561' 00:10:16.272 killing process with pid 1137561 00:10:16.272 
11:53:41 app_cmdline -- common/autotest_common.sh@973 -- # kill 1137561 00:10:16.272 11:53:41 app_cmdline -- common/autotest_common.sh@978 -- # wait 1137561 00:10:16.532 00:10:16.532 real 0m1.727s 00:10:16.532 user 0m2.085s 00:10:16.532 sys 0m0.448s 00:10:16.532 11:53:41 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.532 11:53:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:10:16.532 ************************************ 00:10:16.532 END TEST app_cmdline 00:10:16.532 ************************************ 00:10:16.532 11:53:41 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:16.532 11:53:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:16.532 11:53:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.532 11:53:41 -- common/autotest_common.sh@10 -- # set +x 00:10:16.532 ************************************ 00:10:16.532 START TEST version 00:10:16.532 ************************************ 00:10:16.532 11:53:41 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:10:16.532 * Looking for test storage... 
00:10:16.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:16.532 11:53:41 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:16.532 11:53:41 version -- common/autotest_common.sh@1711 -- # lcov --version 00:10:16.532 11:53:41 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:16.793 11:53:41 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:16.793 11:53:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:16.793 11:53:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:16.793 11:53:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:16.793 11:53:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:10:16.793 11:53:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:10:16.793 11:53:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:10:16.793 11:53:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:10:16.793 11:53:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:10:16.793 11:53:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:10:16.793 11:53:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:10:16.793 11:53:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:16.793 11:53:41 version -- scripts/common.sh@344 -- # case "$op" in 00:10:16.793 11:53:41 version -- scripts/common.sh@345 -- # : 1 00:10:16.793 11:53:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:16.793 11:53:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:16.793 11:53:41 version -- scripts/common.sh@365 -- # decimal 1 00:10:16.793 11:53:41 version -- scripts/common.sh@353 -- # local d=1 00:10:16.793 11:53:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:16.793 11:53:41 version -- scripts/common.sh@355 -- # echo 1 00:10:16.793 11:53:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:10:16.793 11:53:41 version -- scripts/common.sh@366 -- # decimal 2 00:10:16.793 11:53:41 version -- scripts/common.sh@353 -- # local d=2 00:10:16.793 11:53:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:16.793 11:53:41 version -- scripts/common.sh@355 -- # echo 2 00:10:16.793 11:53:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:10:16.793 11:53:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:16.793 11:53:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:16.793 11:53:41 version -- scripts/common.sh@368 -- # return 0 00:10:16.793 11:53:41 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:16.793 11:53:41 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:16.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.793 --rc genhtml_branch_coverage=1 00:10:16.793 --rc genhtml_function_coverage=1 00:10:16.793 --rc genhtml_legend=1 00:10:16.793 --rc geninfo_all_blocks=1 00:10:16.793 --rc geninfo_unexecuted_blocks=1 00:10:16.793 00:10:16.793 ' 00:10:16.793 11:53:41 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:16.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.793 --rc genhtml_branch_coverage=1 00:10:16.793 --rc genhtml_function_coverage=1 00:10:16.793 --rc genhtml_legend=1 00:10:16.793 --rc geninfo_all_blocks=1 00:10:16.793 --rc geninfo_unexecuted_blocks=1 00:10:16.793 00:10:16.793 ' 00:10:16.793 11:53:41 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:16.793 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.793 --rc genhtml_branch_coverage=1 00:10:16.793 --rc genhtml_function_coverage=1 00:10:16.793 --rc genhtml_legend=1 00:10:16.793 --rc geninfo_all_blocks=1 00:10:16.793 --rc geninfo_unexecuted_blocks=1 00:10:16.793 00:10:16.793 ' 00:10:16.793 11:53:41 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:16.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:16.793 --rc genhtml_branch_coverage=1 00:10:16.793 --rc genhtml_function_coverage=1 00:10:16.793 --rc genhtml_legend=1 00:10:16.793 --rc geninfo_all_blocks=1 00:10:16.793 --rc geninfo_unexecuted_blocks=1 00:10:16.793 00:10:16.793 ' 00:10:16.793 11:53:41 version -- app/version.sh@17 -- # get_header_version major 00:10:16.793 11:53:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:16.793 11:53:41 version -- app/version.sh@14 -- # cut -f2 00:10:16.793 11:53:41 version -- app/version.sh@14 -- # tr -d '"' 00:10:16.793 11:53:41 version -- app/version.sh@17 -- # major=25 00:10:16.793 11:53:41 version -- app/version.sh@18 -- # get_header_version minor 00:10:16.793 11:53:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:16.793 11:53:41 version -- app/version.sh@14 -- # cut -f2 00:10:16.793 11:53:41 version -- app/version.sh@14 -- # tr -d '"' 00:10:16.793 11:53:41 version -- app/version.sh@18 -- # minor=1 00:10:16.793 11:53:41 version -- app/version.sh@19 -- # get_header_version patch 00:10:16.793 11:53:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:16.793 11:53:41 version -- app/version.sh@14 -- # cut -f2 00:10:16.793 11:53:41 version -- app/version.sh@14 -- # tr -d '"' 00:10:16.793 
11:53:41 version -- app/version.sh@19 -- # patch=0 00:10:16.793 11:53:41 version -- app/version.sh@20 -- # get_header_version suffix 00:10:16.793 11:53:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:10:16.793 11:53:41 version -- app/version.sh@14 -- # cut -f2 00:10:16.793 11:53:41 version -- app/version.sh@14 -- # tr -d '"' 00:10:16.793 11:53:41 version -- app/version.sh@20 -- # suffix=-pre 00:10:16.793 11:53:41 version -- app/version.sh@22 -- # version=25.1 00:10:16.793 11:53:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:10:16.793 11:53:41 version -- app/version.sh@28 -- # version=25.1rc0 00:10:16.793 11:53:41 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:16.793 11:53:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:16.793 11:53:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:10:16.793 11:53:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:10:16.793 00:10:16.793 real 0m0.285s 00:10:16.793 user 0m0.162s 00:10:16.793 sys 0m0.173s 00:10:16.793 11:53:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:16.793 11:53:41 version -- common/autotest_common.sh@10 -- # set +x 00:10:16.793 ************************************ 00:10:16.793 END TEST version 00:10:16.793 ************************************ 00:10:16.793 11:53:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:10:16.793 11:53:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:10:16.793 11:53:41 -- spdk/autotest.sh@194 -- # uname -s 00:10:16.793 11:53:41 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:10:16.793 11:53:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:16.793 11:53:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:16.793 11:53:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:10:16.793 11:53:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:10:16.793 11:53:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:10:16.793 11:53:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:16.793 11:53:41 -- common/autotest_common.sh@10 -- # set +x 00:10:16.793 11:53:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:10:16.794 11:53:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:10:16.794 11:53:41 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:10:16.794 11:53:41 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:10:16.794 11:53:41 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:10:16.794 11:53:41 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:10:16.794 11:53:41 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:16.794 11:53:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:16.794 11:53:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:16.794 11:53:41 -- common/autotest_common.sh@10 -- # set +x 00:10:17.055 ************************************ 00:10:17.055 START TEST nvmf_tcp 00:10:17.055 ************************************ 00:10:17.055 11:53:41 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:17.055 * Looking for test storage... 
00:10:17.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:17.055 11:53:41 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:17.055 11:53:41 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:10:17.055 11:53:41 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:17.055 11:53:42 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.055 11:53:42 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:10:17.055 11:53:42 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.055 11:53:42 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:17.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.055 --rc genhtml_branch_coverage=1 00:10:17.055 --rc genhtml_function_coverage=1 00:10:17.055 --rc genhtml_legend=1 00:10:17.055 --rc geninfo_all_blocks=1 00:10:17.055 --rc geninfo_unexecuted_blocks=1 00:10:17.055 00:10:17.055 ' 00:10:17.055 11:53:42 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:17.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.055 --rc genhtml_branch_coverage=1 00:10:17.055 --rc genhtml_function_coverage=1 00:10:17.055 --rc genhtml_legend=1 00:10:17.055 --rc geninfo_all_blocks=1 00:10:17.055 --rc geninfo_unexecuted_blocks=1 00:10:17.055 00:10:17.055 ' 00:10:17.055 11:53:42 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:10:17.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.055 --rc genhtml_branch_coverage=1 00:10:17.055 --rc genhtml_function_coverage=1 00:10:17.055 --rc genhtml_legend=1 00:10:17.055 --rc geninfo_all_blocks=1 00:10:17.055 --rc geninfo_unexecuted_blocks=1 00:10:17.055 00:10:17.055 ' 00:10:17.055 11:53:42 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:17.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.055 --rc genhtml_branch_coverage=1 00:10:17.055 --rc genhtml_function_coverage=1 00:10:17.055 --rc genhtml_legend=1 00:10:17.055 --rc geninfo_all_blocks=1 00:10:17.055 --rc geninfo_unexecuted_blocks=1 00:10:17.055 00:10:17.055 ' 00:10:17.055 11:53:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:17.055 11:53:42 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:17.055 11:53:42 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:17.055 11:53:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.055 11:53:42 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.055 11:53:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:17.316 ************************************ 00:10:17.316 START TEST nvmf_target_core 00:10:17.316 ************************************ 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:17.316 * Looking for test storage... 
00:10:17.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:10:17.316 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.317 --rc genhtml_branch_coverage=1 00:10:17.317 --rc genhtml_function_coverage=1 00:10:17.317 --rc genhtml_legend=1 00:10:17.317 --rc geninfo_all_blocks=1 00:10:17.317 --rc geninfo_unexecuted_blocks=1 00:10:17.317 00:10:17.317 ' 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.317 --rc genhtml_branch_coverage=1 
00:10:17.317 --rc genhtml_function_coverage=1 00:10:17.317 --rc genhtml_legend=1 00:10:17.317 --rc geninfo_all_blocks=1 00:10:17.317 --rc geninfo_unexecuted_blocks=1 00:10:17.317 00:10:17.317 ' 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.317 --rc genhtml_branch_coverage=1 00:10:17.317 --rc genhtml_function_coverage=1 00:10:17.317 --rc genhtml_legend=1 00:10:17.317 --rc geninfo_all_blocks=1 00:10:17.317 --rc geninfo_unexecuted_blocks=1 00:10:17.317 00:10:17.317 ' 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:17.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.317 --rc genhtml_branch_coverage=1 00:10:17.317 --rc genhtml_function_coverage=1 00:10:17.317 --rc genhtml_legend=1 00:10:17.317 --rc geninfo_all_blocks=1 00:10:17.317 --rc geninfo_unexecuted_blocks=1 00:10:17.317 00:10:17.317 ' 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 
00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@5 -- # export PATH 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@50 -- # : 0 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:17.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' 0 -eq 1 
']' 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.317 11:53:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:17.580 ************************************ 00:10:17.580 START TEST nvmf_abort 00:10:17.580 ************************************ 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:17.580 * Looking for test storage... 
00:10:17.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.580 
11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:17.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.580 --rc genhtml_branch_coverage=1 00:10:17.580 --rc genhtml_function_coverage=1 00:10:17.580 --rc genhtml_legend=1 00:10:17.580 --rc geninfo_all_blocks=1 00:10:17.580 --rc 
geninfo_unexecuted_blocks=1 00:10:17.580 00:10:17.580 ' 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:17.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.580 --rc genhtml_branch_coverage=1 00:10:17.580 --rc genhtml_function_coverage=1 00:10:17.580 --rc genhtml_legend=1 00:10:17.580 --rc geninfo_all_blocks=1 00:10:17.580 --rc geninfo_unexecuted_blocks=1 00:10:17.580 00:10:17.580 ' 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:17.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.580 --rc genhtml_branch_coverage=1 00:10:17.580 --rc genhtml_function_coverage=1 00:10:17.580 --rc genhtml_legend=1 00:10:17.580 --rc geninfo_all_blocks=1 00:10:17.580 --rc geninfo_unexecuted_blocks=1 00:10:17.580 00:10:17.580 ' 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:17.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.580 --rc genhtml_branch_coverage=1 00:10:17.580 --rc genhtml_function_coverage=1 00:10:17.580 --rc genhtml_legend=1 00:10:17.580 --rc geninfo_all_blocks=1 00:10:17.580 --rc geninfo_unexecuted_blocks=1 00:10:17.580 00:10:17.580 ' 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
paths/export.sh@5 -- # export PATH 00:10:17.580 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.581 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:17.581 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:17.581 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:17.581 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:17.581 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:10:17.581 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:17.581 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:17.581 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:17.581 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.581 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.581 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:17.581 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:17.842 11:53:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:10:17.842 11:53:42 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # mlx=() 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:25.978 11:53:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:25.978 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:25.978 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:25.978 11:53:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:25.978 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:25.978 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:25.978 
11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@247 -- # create_target_ns 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec 
nvmf_ns_spdk ip link set lo up' 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:10:25.978 11:53:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:25.978 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:10:25.979 11:53:49 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:10:25.979 10.0.0.1 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:10:25.979 10.0.0.2 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 
00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:10:25.979 11:53:49 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:10:25.979 
11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # 
ip=10.0.0.1 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:25.979 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:25.979 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.614 ms 00:10:25.979 00:10:25.979 --- 10.0.0.1 ping statistics --- 00:10:25.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.979 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 
00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:10:25.979 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:25.979 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:10:25.979 00:10:25.979 --- 10.0.0.2 ping statistics --- 00:10:25.979 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:25.979 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:25.979 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:25.980 
11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@101 -- # echo cvl_0_1 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 
00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:10:25.980 ' 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=1142069 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 1142069 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # 
ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1142069 ']' 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:25.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:25.980 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:25.980 [2024-12-05 11:53:50.335735] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:10:25.980 [2024-12-05 11:53:50.335797] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:25.980 [2024-12-05 11:53:50.436656] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:25.980 [2024-12-05 11:53:50.492103] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:25.980 [2024-12-05 11:53:50.492150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:25.980 [2024-12-05 11:53:50.492158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:25.980 [2024-12-05 11:53:50.492165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:25.980 [2024-12-05 11:53:50.492175] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:25.980 [2024-12-05 11:53:50.494016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:25.980 [2024-12-05 11:53:50.494213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:25.980 [2024-12-05 11:53:50.494213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:26.253 [2024-12-05 11:53:51.216425] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:26.253 Malloc0 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:26.253 Delay0 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.253 11:53:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.253 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:26.253 [2024-12-05 11:53:51.301691] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:26.513 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.513 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:26.513 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.513 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:26.513 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.513 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:26.513 [2024-12-05 11:53:51.453139] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:29.055 Initializing NVMe Controllers 00:10:29.055 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:29.055 controller IO queue size 128 less than required 00:10:29.055 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:29.055 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:29.055 Initialization complete. Launching workers. 
00:10:29.055 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28474 00:10:29.055 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28539, failed to submit 62 00:10:29.055 success 28478, unsuccessful 61, failed 0 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:29.055 rmmod nvme_tcp 00:10:29.055 rmmod nvme_fabrics 00:10:29.055 rmmod nvme_keyring 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:10:29.055 11:53:53 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 1142069 ']' 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 1142069 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1142069 ']' 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1142069 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1142069 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1142069' 00:10:29.055 killing process with pid 1142069 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1142069 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1142069 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@254 -- # local dev 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@257 -- # remove_target_ns 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:29.055 11:53:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@258 -- # delete_main_bridge 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@121 -- # return 0 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@212 -- # [[ -n '' ]] 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@274 -- # iptr 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # iptables-save 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # iptables-restore 00:10:30.969 00:10:30.969 real 0m13.476s 00:10:30.969 user 0m13.875s 00:10:30.969 sys 0m6.745s 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:30.969 ************************************ 00:10:30.969 END TEST nvmf_abort 00:10:30.969 ************************************ 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:30.969 ************************************ 00:10:30.969 START TEST 
nvmf_ns_hotplug_stress 00:10:30.969 ************************************ 00:10:30.969 11:53:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:31.231 * Looking for test storage... 00:10:31.231 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.231 11:53:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.231 11:53:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:31.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.231 --rc genhtml_branch_coverage=1 00:10:31.231 --rc genhtml_function_coverage=1 00:10:31.231 --rc genhtml_legend=1 00:10:31.231 --rc geninfo_all_blocks=1 00:10:31.231 --rc geninfo_unexecuted_blocks=1 00:10:31.231 00:10:31.231 ' 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:31.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.231 --rc genhtml_branch_coverage=1 00:10:31.231 --rc genhtml_function_coverage=1 00:10:31.231 --rc genhtml_legend=1 00:10:31.231 --rc geninfo_all_blocks=1 00:10:31.231 --rc geninfo_unexecuted_blocks=1 00:10:31.231 00:10:31.231 ' 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:31.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.231 --rc genhtml_branch_coverage=1 00:10:31.231 --rc genhtml_function_coverage=1 00:10:31.231 --rc genhtml_legend=1 00:10:31.231 --rc geninfo_all_blocks=1 00:10:31.231 --rc geninfo_unexecuted_blocks=1 00:10:31.231 00:10:31.231 ' 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:31.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.231 --rc genhtml_branch_coverage=1 00:10:31.231 --rc genhtml_function_coverage=1 00:10:31.231 
--rc genhtml_legend=1 00:10:31.231 --rc geninfo_all_blocks=1 00:10:31.231 --rc geninfo_unexecuted_blocks=1 00:10:31.231 00:10:31.231 ' 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
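common.sh above derives NVME_HOSTNQN from `nvme gen-hostnqn`, which yields an NQN of the shape `nqn.2014-08.org.nvmexpress:uuid:<uuid>` (visible in the captured value). A hedged sketch that builds the same shape without the nvme CLI, assuming a Linux host for the `/proc` fallback:

```shell
#!/usr/bin/env bash
# Build a host NQN in the uuid-based format that `nvme gen-hostnqn`
# emits. This is a sketch, not the nvme-cli implementation.
gen_hostnqn() {
    local uuid
    if command -v uuidgen >/dev/null 2>&1; then
        uuid=$(uuidgen)
    else
        uuid=$(cat /proc/sys/kernel/random/uuid)   # Linux fallback
    fi
    printf 'nqn.2014-08.org.nvmexpress:uuid:%s\n' "$uuid"
}

gen_hostnqn
```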
00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:31.231 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:31.232 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:10:31.232 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # net_devs=() 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:10:39.374 11:54:03 
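The `integer expression expected` message captured above comes from `'[' '' -eq 1 ']'`: an unset option expands to an empty string, and `-eq` requires integers on both sides. The script tolerates it (the test simply evaluates false), but defaulting the expansion avoids the stderr noise; a minimal sketch:

```shell
#!/usr/bin/env bash
# `[ "$flag" -eq 1 ]` prints "integer expression expected" when flag is
# empty; defaulting the expansion keeps the test numeric.
flag=""

# noisy form (stderr message suppressed here, test evaluates false):
[ "$flag" -eq 1 ] 2>/dev/null && echo "noisy: on"

# defensive form: treat empty/unset as 0.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag is on"
else
    echo "flag is off"    # this branch runs for flag=""
fi
```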
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.374 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:39.375 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:39.375 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:39.375 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:39.375 11:54:03 
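The discovery loop above globs `/sys/bus/pci/devices/$pci/net/*` for each supported PCI device and keeps only the basenames (`"${pci_net_devs[@]##*/}"`) as interface names. A standalone sketch of that glob-and-strip against a throwaway directory tree standing in for sysfs (the paths and names mirror the log but are fabricated locally):

```shell
#!/usr/bin/env bash
# Mimic the sysfs layout: each PCI device directory may contain a net/
# subdirectory with one entry per network interface.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:4b:00.0/net/cvl_0_0" "$sysfs/0000:4b:00.1/net/cvl_0_1"

net_devs=()
for pci in "$sysfs"/*; do
    pci_net_devs=("$pci/net/"*)              # glob, as in nvmf/common.sh
    pci_net_devs=("${pci_net_devs[@]##*/}")  # keep basenames only
    net_devs+=("${pci_net_devs[@]}")
done
echo "${net_devs[@]}"   # cvl_0_0 cvl_0_1
rm -rf "$sysfs"
```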
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:39.375 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@247 -- # 
create_target_ns 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 
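setup_interfaces above hands out addresses from an integer pool starting at 0x0a000001 and formats them with `printf '%u.%u.%u.%u'` (the `val_to_ip` helper visible further down, which prints `10 0 0 1`). The byte extraction behind that can be sketched as:

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer to dotted-quad notation, as val_to_ip does
# with printf: shift out each byte from most to least significant.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
        $(( (val >> 8) & 255 ))  $((  val        & 255 ))
}

val_to_ip $(( 0x0a000001 ))   # 10.0.0.1
val_to_ip 167772162           # 10.0.0.2
```

Incrementing the pool by one per interface is what produces the consecutive 10.0.0.1/10.0.0.2 pair seen in the log.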
00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:10:39.375 10.0.0.1 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 
in_ns=NVMF_TARGET_NS_CMD 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:10:39.375 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:10:39.376 10.0.0.2 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip 
link set cvl_0_0 up' 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:10:39.376 11:54:03 
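The `ipts` wrapper above re-issues its arguments to iptables with an `-m comment --comment 'SPDK_NVMF:<args>'` suffix, so test-created rules are tagged and can be found for cleanup later. A dry-run sketch of that tagging, with `echo` standing in for iptables since inserting rules needs root:

```shell
#!/usr/bin/env bash
# Tag an iptables rule with a recognizable comment, in the style of
# nvmf/common.sh's ipts wrapper; echo stands in for iptables here.
ipts_dry() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts_dry -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

Embedding the original argument string in the comment makes teardown a matter of listing rules and matching on the `SPDK_NVMF:` prefix.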
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:39.376 
11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:39.376 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:39.376 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.685 ms 00:10:39.376 00:10:39.376 --- 10.0.0.1 ping statistics --- 00:10:39.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.376 rtt min/avg/max/mdev = 0.685/0.685/0.685/0.000 ms 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:39.376 11:54:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:10:39.376 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.376 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:10:39.376 00:10:39.376 --- 10.0.0.2 ping statistics --- 00:10:39.376 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.376 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:39.376 11:54:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 
10.0.0.1 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:10:39.376 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:39.377 11:54:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:10:39.377 11:54:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:10:39.377 ' 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=1147157 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 1147157 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1147157 ']' 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:39.377 11:54:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.377 [2024-12-05 11:54:03.893934] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:10:39.377 [2024-12-05 11:54:03.894000] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.377 [2024-12-05 11:54:03.994136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:39.377 [2024-12-05 11:54:04.045613] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.377 [2024-12-05 11:54:04.045662] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.377 [2024-12-05 11:54:04.045671] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.377 [2024-12-05 11:54:04.045678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.377 [2024-12-05 11:54:04.045684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:39.377 [2024-12-05 11:54:04.047542] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.377 [2024-12-05 11:54:04.047702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.377 [2024-12-05 11:54:04.047703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.949 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.949 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:10:39.949 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:39.949 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.949 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:39.949 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.949 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:39.949 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:39.949 [2024-12-05 11:54:04.929133] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.949 11:54:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:40.209 11:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.470 [2024-12-05 11:54:05.323987] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.470 11:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:40.729 11:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:10:40.729 Malloc0 00:10:40.729 11:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:40.989 Delay0 00:10:40.989 11:54:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:41.248 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:41.508 NULL1 00:10:41.508 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:41.508 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1147855 00:10:41.508 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:41.508 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:41.508 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.768 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.089 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:42.089 11:54:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:42.089 true 00:10:42.089 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:42.089 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.349 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.608 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:42.608 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:42.608 true 00:10:42.608 11:54:07 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:42.608 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.869 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.130 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:43.130 11:54:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:43.130 true 00:10:43.130 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:43.130 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.389 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.649 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:43.649 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:43.649 true 00:10:43.649 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:43.649 11:54:08 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.909 11:54:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.170 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:44.170 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:44.170 true 00:10:44.170 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:44.170 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.431 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.691 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:44.691 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:44.691 true 00:10:44.691 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:44.691 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.952 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.212 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:45.212 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:45.472 true 00:10:45.472 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:45.472 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.472 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.733 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:45.733 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:45.994 true 00:10:45.994 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:45.994 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.994 
11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.272 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:46.272 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:46.535 true 00:10:46.535 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:46.535 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.535 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.795 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:46.795 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:47.055 true 00:10:47.055 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:47.055 11:54:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.317 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.317 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:47.317 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:47.576 true 00:10:47.576 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:47.576 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.837 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:47.837 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:47.837 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:48.098 true 00:10:48.098 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:48.098 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.360 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.360 
11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:48.360 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:48.621 true 00:10:48.621 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:48.621 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.881 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.160 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:49.160 11:54:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:49.160 true 00:10:49.160 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:49.160 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.470 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.470 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:49.755 11:54:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:49.755 true 00:10:49.756 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:49.756 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.016 11:54:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.016 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:50.016 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:50.276 true 00:10:50.276 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:50.276 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.536 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.796 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:50.796 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:50.796 true 00:10:50.796 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:50.796 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.056 11:54:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.318 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:51.318 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:51.318 true 00:10:51.318 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:51.318 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.579 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.840 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:51.840 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:51.840 true 00:10:52.100 11:54:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:52.100 11:54:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.100 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.361 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:52.361 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:52.622 true 00:10:52.622 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:52.622 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:52.622 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.884 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:52.884 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:53.145 true 00:10:53.145 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:53.145 11:54:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.406 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.406 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:53.406 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:53.667 true 00:10:53.667 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:53.667 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.928 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.928 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:53.928 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:54.188 true 00:10:54.188 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:54.188 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.450 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.711 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:54.711 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:54.711 true 00:10:54.711 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:54.711 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.971 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.231 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:55.231 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:55.231 true 00:10:55.231 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:55.231 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.497 
11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.757 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:55.757 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:55.757 true 00:10:56.018 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:56.018 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.018 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.279 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:56.279 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:10:56.540 true 00:10:56.540 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:56.540 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.540 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.800 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:56.800 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:57.059 true 00:10:57.059 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:57.059 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.319 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.319 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:57.319 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:57.578 true 00:10:57.578 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:57.578 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.838 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.838 
11:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:57.838 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:58.098 true 00:10:58.098 11:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:58.098 11:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.358 11:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.619 11:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:58.619 11:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:58.619 true 00:10:58.619 11:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:58.619 11:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.878 11:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.139 11:54:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:59.139 11:54:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:59.139 true 00:10:59.139 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:59.139 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.406 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.666 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:59.666 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:59.666 true 00:10:59.666 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:10:59.666 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.926 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.185 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:11:00.185 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:11:00.185 true 00:11:00.445 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:00.445 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.445 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.708 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:11:00.708 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:11:00.968 true 00:11:00.969 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:00.969 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.969 11:54:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.229 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:11:01.229 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:11:01.490 true 00:11:01.490 11:54:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:01.490 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.752 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.752 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:11:01.752 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:11:02.012 true 00:11:02.012 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:02.012 11:54:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.271 11:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.272 11:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:11:02.272 11:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:11:02.532 true 00:11:02.532 11:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:02.532 11:54:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.794 11:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.794 11:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:11:02.794 11:54:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:11:03.054 true 00:11:03.054 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:03.054 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.315 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.575 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:11:03.575 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:11:03.575 true 00:11:03.575 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:03.575 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.836 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.097 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:11:04.097 11:54:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:11:04.097 true 00:11:04.097 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:04.097 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.358 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.618 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:11:04.618 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:11:04.878 true 00:11:04.878 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:04.878 11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.878 
11:54:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.138 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:11:05.138 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:11:05.399 true 00:11:05.399 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:05.400 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.400 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.659 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:11:05.659 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:11:05.918 true 00:11:05.918 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:05.918 11:54:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.178 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.178 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:11:06.178 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:11:06.438 true 00:11:06.438 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:06.438 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.698 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.959 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:11:06.959 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:11:06.959 true 00:11:06.959 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:06.959 11:54:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.219 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.479 
11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:11:07.479 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:11:07.479 true 00:11:07.479 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:07.479 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.739 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.999 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:11:07.999 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:11:07.999 true 00:11:08.259 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:08.259 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.259 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.519 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:11:08.519 11:54:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:11:08.780 true 00:11:08.780 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:08.780 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.780 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.041 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:11:09.041 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:11:09.301 true 00:11:09.301 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:09.302 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.564 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.564 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:11:09.564 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:11:09.825 true 00:11:09.825 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:09.825 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.085 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.085 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:11:10.085 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:11:10.345 true 00:11:10.345 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:10.345 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.605 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:10.605 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:11:10.605 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:11:10.866 true 00:11:10.866 11:54:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:10.866 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.128 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.389 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054 00:11:11.389 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:11:11.389 true 00:11:11.389 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:11.389 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.650 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.912 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055 00:11:11.912 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:11:11.912 Initializing NVMe Controllers 00:11:11.912 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:11.912 Controller IO queue 
size 128, less than required. 00:11:11.912 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:11.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:11.912 Initialization complete. Launching workers. 00:11:11.912 ======================================================== 00:11:11.912 Latency(us) 00:11:11.912 Device Information : IOPS MiB/s Average min max 00:11:11.912 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 31124.83 15.20 4112.48 1117.64 8020.87 00:11:11.912 ======================================================== 00:11:11.912 Total : 31124.83 15.20 4112.48 1117.64 8020.87 00:11:11.912 00:11:11.912 true 00:11:11.912 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1147855 00:11:11.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1147855) - No such process 00:11:11.912 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1147855 00:11:11.912 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.172 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:12.433 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:11:12.433 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:11:12.433 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:11:12.433 11:54:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:12.433 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:11:12.433 null0 00:11:12.433 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:12.433 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:12.433 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:11:12.692 null1 00:11:12.692 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:12.692 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:12.692 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:11:12.953 null2 00:11:12.953 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:12.953 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:12.953 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:11:12.953 null3 00:11:13.213 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:13.213 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:11:13.213 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:13.213 null4 00:11:13.213 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:13.213 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:13.213 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:13.473 null5 00:11:13.473 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:13.473 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:13.473 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:13.473 null6 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:13.733 null7 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.733 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1154864 1154865 1154867 1154869 1154871 1154873 1154875 1154877 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.734 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:13.994 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:13.994 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:13.994 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.994 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:13.994 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:13.994 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:13.994 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:13.995 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.320 11:54:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.320 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:14.321 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.321 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:11:14.321 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:14.321 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:14.321 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:14.321 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:14.321 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:14.581 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:14.842 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:14.842 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:14.842 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:14.842 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:14.842 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:14.842 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:14.842 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:14.842 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:14.842 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:14.842 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:14.842 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:14.842 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:14.842 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:14.842 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:15.101 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.101 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.101 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:15.101 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.101 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.101 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:15.101 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:15.101 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.101 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.101 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:15.101 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.101 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.101 11:54:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:15.101 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.101 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.101 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:15.101 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.101 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.101 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:15.101 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:15.101 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:15.101 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.382 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:15.644 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.904 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:15.905 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:15.905 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:15.905 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:15.905 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:15.905 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:16.165 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.165 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.165 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.165 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:16.432 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:16.716 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.995 11:54:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:11:16.995 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:11:16.995 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.995 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:16.995 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:11:16.995 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:16.995 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:17.263 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:17.524 11:54:42 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.524 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 
-- # trap - SIGINT SIGTERM EXIT 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:17.785 rmmod nvme_tcp 00:11:17.785 rmmod nvme_fabrics 00:11:17.785 rmmod nvme_keyring 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 1147157 ']' 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 1147157 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1147157 ']' 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1147157 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:11:17.785 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1147157 00:11:18.045 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:18.045 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:18.045 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1147157' 00:11:18.045 killing process with pid 1147157 00:11:18.045 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1147157 00:11:18.045 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1147157 00:11:18.045 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:18.045 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:11:18.045 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@254 -- # local dev 00:11:18.045 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:11:18.045 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:18.045 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:18.045 11:54:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:20.600 11:54:45 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # return 0 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # 
ip addr flush dev cvl_0_1 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@274 -- # iptr 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-save 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-restore 00:11:20.600 00:11:20.600 real 0m49.103s 00:11:20.600 user 3m20.068s 00:11:20.600 sys 0m17.641s 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:20.600 ************************************ 00:11:20.600 END TEST nvmf_ns_hotplug_stress 00:11:20.600 ************************************ 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:20.600 ************************************ 00:11:20.600 START TEST nvmf_delete_subsystem 00:11:20.600 ************************************ 00:11:20.600 11:54:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:20.600 * Looking for test storage... 00:11:20.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:20.600 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:20.600 11:54:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:20.601 11:54:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:20.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.601 --rc genhtml_branch_coverage=1 00:11:20.601 --rc genhtml_function_coverage=1 00:11:20.601 --rc genhtml_legend=1 00:11:20.601 --rc geninfo_all_blocks=1 00:11:20.601 --rc geninfo_unexecuted_blocks=1 00:11:20.601 00:11:20.601 ' 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:20.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.601 --rc genhtml_branch_coverage=1 00:11:20.601 --rc genhtml_function_coverage=1 00:11:20.601 --rc genhtml_legend=1 00:11:20.601 --rc geninfo_all_blocks=1 00:11:20.601 --rc geninfo_unexecuted_blocks=1 00:11:20.601 00:11:20.601 ' 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:20.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.601 --rc genhtml_branch_coverage=1 00:11:20.601 --rc genhtml_function_coverage=1 00:11:20.601 --rc genhtml_legend=1 00:11:20.601 --rc geninfo_all_blocks=1 00:11:20.601 --rc geninfo_unexecuted_blocks=1 00:11:20.601 00:11:20.601 ' 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:20.601 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:20.601 --rc genhtml_branch_coverage=1 00:11:20.601 --rc genhtml_function_coverage=1 00:11:20.601 --rc genhtml_legend=1 00:11:20.601 --rc geninfo_all_blocks=1 00:11:20.601 --rc geninfo_unexecuted_blocks=1 00:11:20.601 00:11:20.601 ' 
00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:20.601 11:54:45 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:20.601 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:11:20.601 11:54:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # e810=() 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:11:28.750 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:28.751 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:28.751 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 
00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:28.751 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:28.751 11:54:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:28.751 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@247 -- # create_target_ns 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:28.751 11:54:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:28.751 11:54:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:28.751 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # 
set_ip cvl_0_0 167772161 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:28.752 10.0.0.1 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:28.752 11:54:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:28.752 10.0.0.2 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:28.752 11:54:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 
00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 
00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:28.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.681 ms 00:11:28.752 00:11:28.752 --- 10.0.0.1 ping statistics --- 00:11:28.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.752 rtt min/avg/max/mdev = 0.681/0.681/0.681/0.000 ms 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@159 -- # get_net_dev target0 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:28.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:28.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:11:28.752 00:11:28.752 --- 10.0.0.2 ping statistics --- 00:11:28.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.752 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:11:28.752 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:28.753 11:54:52 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:28.753 
11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]]
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2
00:11:28.753 '
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:11:28.753 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:28.753 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=1160079
00:11:28.753 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 1160079
00:11:28.753 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:11:28.753 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1160079 ']'
00:11:28.753 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:28.753 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:11:28.753 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:28.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:28.753 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:11:28.753 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:28.753 [2024-12-05 11:54:53.063237] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:11:28.753 [2024-12-05 11:54:53.063302] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:28.753 [2024-12-05 11:54:53.164807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:28.753 [2024-12-05 11:54:53.216173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:28.753 [2024-12-05 11:54:53.216228] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:28.753 [2024-12-05 11:54:53.216237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:28.753 [2024-12-05 11:54:53.216244] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:28.753 [2024-12-05 11:54:53.216250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:11:28.753 [2024-12-05 11:54:53.217980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:28.753 [2024-12-05 11:54:53.217983] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:29.014 [2024-12-05 11:54:53.934784] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:29.014 [2024-12-05 11:54:53.959121] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:29.014 NULL1
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:29.014 Delay0
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1160429
00:11:29.014 11:54:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:11:29.014 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:11:29.273 [2024-12-05 11:54:54.086127] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:11:31.186 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:11:31.186 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:31.187 11:54:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:31.187 Read completed with error (sct=0, sc=8)
00:11:31.187 starting I/O failed: -6
00:11:31.187 Write completed with error (sct=0, sc=8)
00:11:31.187 [2024-12-05 11:54:56.213031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879780 is same with the state(6) to be set
00:11:31.187 [2024-12-05 11:54:56.217803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb6ec000c40 is same with the state(6) to be set
00:11:32.571 [2024-12-05 11:54:57.188207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x187a9b0 is same with the state(6) to be set
00:11:32.572 [2024-12-05 11:54:57.217925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18792c0 is same with the state(6) to be set
00:11:32.572 [2024-12-05 11:54:57.218243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879960 is same with the state(6) to be set
00:11:32.572 [2024-12-05 11:54:57.219414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb6ec00d7c0 is same with the state(6) to be set
00:11:32.572 [2024-12-05 11:54:57.220123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb6ec00d020 is same with the state(6) to be set
00:11:32.572 Initializing NVMe Controllers
00:11:32.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:32.572 Controller IO queue size 128, less than required.
00:11:32.572 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:32.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:32.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:32.572 Initialization complete. Launching workers.
00:11:32.572 ========================================================
00:11:32.572                                                                                                                Latency(us)
00:11:32.572 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:11:32.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     187.65       0.09  902949.10     456.22 1009553.69
00:11:32.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     155.80       0.08  944842.57     335.01 2002207.74
00:11:32.572 ========================================================
00:11:32.572 Total                                                                    :     343.45       0.17  921952.95     335.01 2002207.74
00:11:32.572
00:11:32.572 [2024-12-05 11:54:57.220860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x187a9b0 (9): Bad file descriptor
00:11:32.572 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:11:32.572 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.572 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:11:32.572 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1160429
00:11:32.572 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1160429
00:11:32.833 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1160429) - No such process
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1160429
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1160429
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1160429
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:32.833 [2024-12-05 11:54:57.749969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1161114
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1161114
00:11:32.833 11:54:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
[2024-12-05 11:54:57.857366] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:11:33.403 11:54:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:33.403 11:54:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1161114
00:11:33.403 11:54:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:33.973 11:54:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:33.973 11:54:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1161114
00:11:33.973 11:54:58 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:34.234 11:54:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:34.494 11:54:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1161114
00:11:34.494 11:54:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:34.758 11:54:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:34.758 11:54:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1161114
00:11:34.758 11:54:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:35.333 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:35.333 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1161114
00:11:35.333 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:35.905 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:11:35.905 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1161114
00:11:35.905 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:11:35.905 Initializing NVMe Controllers
00:11:35.905 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:35.905 Controller IO queue size 128, less than required.
00:11:35.905 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:35.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:11:35.905 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:11:35.905 Initialization complete. Launching workers.
00:11:35.905 ======================================================== 00:11:35.905 Latency(us) 00:11:35.905 Device Information : IOPS MiB/s Average min max 00:11:35.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001992.00 1000185.50 1005077.24 00:11:35.905 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003164.96 1000349.74 1007947.95 00:11:35.905 ======================================================== 00:11:35.905 Total : 256.00 0.12 1002578.48 1000185.50 1007947.95 00:11:35.905 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1161114 00:11:36.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1161114) - No such process 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1161114 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r 
nvme-tcp 00:11:36.475 rmmod nvme_tcp 00:11:36.475 rmmod nvme_fabrics 00:11:36.475 rmmod nvme_keyring 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 1160079 ']' 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 1160079 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1160079 ']' 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1160079 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1160079 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1160079' 00:11:36.475 killing process with pid 1160079 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1160079 00:11:36.475 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
1160079 00:11:36.735 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:36.735 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:11:36.735 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@254 -- # local dev 00:11:36.735 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:11:36.735 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:36.735 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:36.735 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # return 0 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:11:38.643 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@274 -- # iptr 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-save 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-restore 00:11:38.644 00:11:38.644 real 0m18.479s 00:11:38.644 user 0m30.800s 00:11:38.644 sys 
0m6.822s 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:11:38.644 ************************************ 00:11:38.644 END TEST nvmf_delete_subsystem 00:11:38.644 ************************************ 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:38.644 11:55:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:38.906 ************************************ 00:11:38.906 START TEST nvmf_host_management 00:11:38.906 ************************************ 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:11:38.906 * Looking for test storage... 
00:11:38.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:11:38.906 11:55:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:38.906 11:55:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:38.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.906 --rc genhtml_branch_coverage=1 00:11:38.906 --rc genhtml_function_coverage=1 00:11:38.906 --rc genhtml_legend=1 00:11:38.906 --rc geninfo_all_blocks=1 00:11:38.906 --rc geninfo_unexecuted_blocks=1 00:11:38.906 00:11:38.906 ' 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:38.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.906 --rc genhtml_branch_coverage=1 00:11:38.906 --rc genhtml_function_coverage=1 00:11:38.906 --rc genhtml_legend=1 00:11:38.906 --rc geninfo_all_blocks=1 00:11:38.906 --rc geninfo_unexecuted_blocks=1 00:11:38.906 00:11:38.906 ' 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:38.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.906 --rc genhtml_branch_coverage=1 00:11:38.906 --rc genhtml_function_coverage=1 00:11:38.906 --rc genhtml_legend=1 00:11:38.906 --rc geninfo_all_blocks=1 00:11:38.906 --rc geninfo_unexecuted_blocks=1 00:11:38.906 00:11:38.906 ' 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:38.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:38.906 --rc genhtml_branch_coverage=1 00:11:38.906 --rc genhtml_function_coverage=1 00:11:38.906 --rc genhtml_legend=1 00:11:38.906 --rc geninfo_all_blocks=1 00:11:38.906 --rc geninfo_unexecuted_blocks=1 00:11:38.906 00:11:38.906 ' 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:38.906 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.907 11:55:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:38.907 11:55:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:38.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:38.907 
11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:11:38.907 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 
-- # net_devs=() 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:47.053 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.053 11:55:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:47.053 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:47.053 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:47.054 11:55:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:47.054 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:47.054 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
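The "Found net devices under 0000:4b:00.0: cvl_0_0" lines above come from a two-step glob over sysfs: `nvmf/common.sh@227` lists `/sys/bus/pci/devices/<addr>/net/`* and `@243` strips the directory prefix. A minimal sketch of that pattern, mocked with a temp directory (the PCI address and interface name are taken from the trace; the mock tree is hypothetical) so it runs without the E810 hardware:

```shell
#!/usr/bin/env bash
# Mocked sysfs tree: on a real host this would be /sys/bus/pci/devices.
sysfs=$(mktemp -d)
pci=0000:4b:00.0
mkdir -p "$sysfs/$pci/net/cvl_0_0"

# Same two-step pattern as nvmf/common.sh@227 and @243 in the trace:
pci_net_devs=("$sysfs/$pci/net/"*)       # glob: one path per interface
pci_net_devs=("${pci_net_devs[@]##*/}")  # strip everything up to the last /
echo "Found net devices under $pci: ${pci_net_devs[*]}"
# -> Found net devices under 0000:4b:00.0: cvl_0_0

rm -rf "$sysfs"
```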
00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@247 -- # create_target_ns 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo 
up 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:11:47.054 11:55:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 
10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:47.054 10.0.0.1 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:47.054 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
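The `val_to_ip` steps above turn the pooled integers 167772161 and 167772162 into 10.0.0.1 and 10.0.0.2 before handing the octets to `printf '%u.%u.%u.%u'`. A self-contained sketch of that conversion (the bit-shifting shown here is a reconstruction; `nvmf/setup.sh`'s own octet extraction may differ):

```shell
#!/usr/bin/env bash
# Render a 32-bit integer as a dotted-quad IPv4 address, one octet per byte.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >>  8) & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772161  # 0x0a000001 -> 10.0.0.1
val_to_ip 167772162  # 0x0a000002 -> 10.0.0.2
```

Allocating addresses as plain integers (`ip_pool += 2` in the trace) keeps the pairing arithmetic trivial; the dotted form is only produced at the point of use.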
nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:47.055 10.0.0.2 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:47.055 
11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:47.055 
11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:47.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:47.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.637 ms 00:11:47.055 00:11:47.055 --- 10.0.0.1 ping statistics --- 00:11:47.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.055 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:47.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:11:47.055 00:11:47.055 --- 10.0.0.2 ping statistics --- 00:11:47.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.055 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:47.055 
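A pattern worth noting in the helpers traced above (`set_ip`, `set_up`, `ping_ip`, `get_ip_address`): each takes an optional variable *name*, binds it with a bash nameref (`local -n`), and prefixes the real command with the wrapper stored there, so one helper serves both the host and the `nvmf_ns_spdk` namespace. A sketch of that dispatch, with `echo` standing in for the privileged `ip` invocation (my substitution; the trace executes the command via `eval`):

```shell
#!/usr/bin/env bash
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)

# set_up DEV [ARRAY_NAME]: bring DEV up, optionally inside the namespace
# whose wrapper command is stored in the named array.
set_up() {
  local dev=$1 in_ns=${2-}
  local -a prefix=()
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns      # nameref: resolve the array by name
    prefix=("${ns[@]}")
  fi
  # A real setup script would execute this; echoed here so the sketch is inert.
  echo "${prefix[@]}" ip link set "$dev" up
}

set_up lo NVMF_TARGET_NS_CMD  # -> ip netns exec nvmf_ns_spdk ip link set lo up
set_up cvl_0_0                # -> ip link set cvl_0_0 up
```

Passing the array name rather than its contents is what lets the same function body work unchanged whether the wrapper is empty or a multi-word `ip netns exec` prefix.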
11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:47.055 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:47.056 
11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 
00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:11:47.056 ' 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
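Earlier in the trace, the `ipts` wrapper (`nvmf/common.sh@547`) opened TCP port 4420 and tagged the rule with an `SPDK_NVMF:` comment embedding the rule's own arguments, which is what lets teardown later locate and delete exactly the rules this run added. A sketch of that tagging, with `echo` in place of the real `iptables` call (substituted so the example needs no root):

```shell
#!/usr/bin/env bash
# Wrap iptables so every rule carries a comment naming its own arguments.
ipts() {
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

In a real run the comment survives in `iptables -S` output, so cleanup can grep for `SPDK_NVMF:` and replay each match with `-D` instead of tracking rule state separately.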
00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=1166154 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 1166154 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1166154 ']' 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:47.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.056 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:47.056 [2024-12-05 11:55:11.607187] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:11:47.056 [2024-12-05 11:55:11.607253] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.056 [2024-12-05 11:55:11.710598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.056 [2024-12-05 11:55:11.763314] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.056 [2024-12-05 11:55:11.763368] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.056 [2024-12-05 11:55:11.763377] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.057 [2024-12-05 11:55:11.763384] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.057 [2024-12-05 11:55:11.763390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:47.057 [2024-12-05 11:55:11.765548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.057 [2024-12-05 11:55:11.765709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.057 [2024-12-05 11:55:11.765869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:47.057 [2024-12-05 11:55:11.765870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:47.632 [2024-12-05 11:55:12.479221] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:11:47.632 11:55:12 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:47.632 Malloc0 00:11:47.632 [2024-12-05 11:55:12.558864] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1166414 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1166414 /var/tmp/bdevperf.sock 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1166414 ']' 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:47.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:11:47.632 { 00:11:47.632 "params": { 00:11:47.632 "name": "Nvme$subsystem", 00:11:47.632 "trtype": "$TEST_TRANSPORT", 00:11:47.632 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:47.632 "adrfam": "ipv4", 00:11:47.632 "trsvcid": "$NVMF_PORT", 00:11:47.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:47.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:47.632 "hdgst": ${hdgst:-false}, 
00:11:47.632 "ddgst": ${ddgst:-false} 00:11:47.632 }, 00:11:47.632 "method": "bdev_nvme_attach_controller" 00:11:47.632 } 00:11:47.632 EOF 00:11:47.632 )") 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:11:47.632 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:11:47.632 "params": { 00:11:47.632 "name": "Nvme0", 00:11:47.632 "trtype": "tcp", 00:11:47.632 "traddr": "10.0.0.2", 00:11:47.632 "adrfam": "ipv4", 00:11:47.632 "trsvcid": "4420", 00:11:47.632 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:47.632 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:47.632 "hdgst": false, 00:11:47.632 "ddgst": false 00:11:47.632 }, 00:11:47.632 "method": "bdev_nvme_attach_controller" 00:11:47.632 }' 00:11:47.632 [2024-12-05 11:55:12.668656] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:11:47.632 [2024-12-05 11:55:12.668724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166414 ] 00:11:47.894 [2024-12-05 11:55:12.765378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.894 [2024-12-05 11:55:12.818780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.154 Running I/O for 10 seconds... 
00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=700 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 700 -ge 100 ']' 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.725 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:48.725 [2024-12-05 11:55:13.558492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688ab0 is same with the state(6) to be set 00:11:48.726 [2024-12-05 11:55:13.558753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688ab0 is same with the state(6) to be set 00:11:48.726 [2024-12-05 11:55:13.558771] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688ab0 is 
same with the state(6) to be set 00:11:48.726 [2024-12-05 11:55:13.558779] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688ab0 is same with the state(6) to be set 00:11:48.726 [2024-12-05 11:55:13.558786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688ab0 is same with the state(6) to be set 00:11:48.726 [2024-12-05 11:55:13.558793] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688ab0 is same with the state(6) to be set 00:11:48.726 [2024-12-05 11:55:13.558800] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688ab0 is same with the state(6) to be set 00:11:48.726 [2024-12-05 11:55:13.558807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688ab0 is same with the state(6) to be set 00:11:48.726 [2024-12-05 11:55:13.558814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688ab0 is same with the state(6) to be set 00:11:48.726 [2024-12-05 11:55:13.558821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1688ab0 is same with the state(6) to be set 00:11:48.726 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.726 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:11:48.726 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.726 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:48.726 [2024-12-05 11:55:13.565464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.726 [2024-12-05 11:55:13.565517] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.726 [2024-12-05 11:55:13.565537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.726 [2024-12-05 11:55:13.565553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:11:48.726 [2024-12-05 11:55:13.565570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a9010 is same with the state(6) to be set 00:11:48.726 [2024-12-05 11:55:13.565650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:98048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:11:48.726 [2024-12-05 11:55:13.565694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565797] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.565984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.565991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.566001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.566008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.566018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.566025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.566035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.566042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.566052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.566059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.566069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.566077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.566086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 
[2024-12-05 11:55:13.566093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.566103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.566111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.566121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.566128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.566139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.566146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.726 [2024-12-05 11:55:13.566158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.726 [2024-12-05 11:55:13.566166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566194] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 
[2024-12-05 11:55:13.566501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566594] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566696] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.566781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:11:48.727 [2024-12-05 11:55:13.566793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:48.727 [2024-12-05 11:55:13.568080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:11:48.727 task offset: 97920 on job bdev=Nvme0n1 fails 00:11:48.727 00:11:48.727 Latency(us) 00:11:48.727 [2024-12-05T10:55:13.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:48.727 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:48.727 Job: Nvme0n1 ended in about 0.53 seconds with error 00:11:48.727 Verification LBA range: start 0x0 length 0x400 00:11:48.727 Nvme0n1 : 0.53 1443.26 90.20 120.74 0.00 39878.61 1774.93 35607.89 00:11:48.727 [2024-12-05T10:55:13.776Z] =================================================================================================================== 00:11:48.727 [2024-12-05T10:55:13.776Z] Total : 1443.26 90.20 120.74 0.00 39878.61 1774.93 35607.89 00:11:48.727 [2024-12-05 11:55:13.570301] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:11:48.727 [2024-12-05 11:55:13.570338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a9010 (9): Bad file descriptor 00:11:48.728 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.728 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:11:48.728 [2024-12-05 11:55:13.616980] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:11:49.666 11:55:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1166414 00:11:49.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1166414) - No such process 00:11:49.666 11:55:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:11:49.666 11:55:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:11:49.666 11:55:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:11:49.666 11:55:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:11:49.666 11:55:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:11:49.666 11:55:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:11:49.666 11:55:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:11:49.666 11:55:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:11:49.666 { 00:11:49.666 "params": { 00:11:49.666 "name": "Nvme$subsystem", 00:11:49.666 "trtype": "$TEST_TRANSPORT", 00:11:49.666 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:49.666 "adrfam": "ipv4", 00:11:49.666 "trsvcid": "$NVMF_PORT", 00:11:49.666 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:49.666 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:49.666 "hdgst": ${hdgst:-false}, 00:11:49.666 "ddgst": ${ddgst:-false} 00:11:49.666 }, 00:11:49.666 "method": "bdev_nvme_attach_controller" 00:11:49.666 } 00:11:49.666 EOF 00:11:49.666 )") 00:11:49.666 
11:55:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:11:49.666 11:55:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:11:49.666 11:55:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:11:49.666 11:55:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:11:49.666 "params": { 00:11:49.666 "name": "Nvme0", 00:11:49.666 "trtype": "tcp", 00:11:49.666 "traddr": "10.0.0.2", 00:11:49.666 "adrfam": "ipv4", 00:11:49.666 "trsvcid": "4420", 00:11:49.666 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:49.666 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:11:49.666 "hdgst": false, 00:11:49.666 "ddgst": false 00:11:49.666 }, 00:11:49.666 "method": "bdev_nvme_attach_controller" 00:11:49.666 }' 00:11:49.666 [2024-12-05 11:55:14.634716] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:11:49.666 [2024-12-05 11:55:14.634767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1166895 ] 00:11:49.925 [2024-12-05 11:55:14.722738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.925 [2024-12-05 11:55:14.758670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:49.925 Running I/O for 1 seconds... 
00:11:51.308 1604.00 IOPS, 100.25 MiB/s 00:11:51.308 Latency(us) 00:11:51.308 [2024-12-05T10:55:16.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:51.308 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:11:51.308 Verification LBA range: start 0x0 length 0x400 00:11:51.308 Nvme0n1 : 1.01 1646.08 102.88 0.00 0.00 38204.76 5188.27 32331.09 00:11:51.308 [2024-12-05T10:55:16.357Z] =================================================================================================================== 00:11:51.308 [2024-12-05T10:55:16.357Z] Total : 1646.08 102.88 0.00 0.00 38204.76 5188.27 32331.09 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:51.308 11:55:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:51.308 rmmod nvme_tcp 00:11:51.308 rmmod nvme_fabrics 00:11:51.308 rmmod nvme_keyring 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 1166154 ']' 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 1166154 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1166154 ']' 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1166154 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1166154 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1166154' 00:11:51.308 killing process with pid 1166154 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1166154 00:11:51.308 11:55:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1166154 00:11:51.308 [2024-12-05 11:55:16.318703] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@254 -- # local dev 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@257 -- # remove_target_ns 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:51.308 11:55:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@258 -- # delete_main_bridge 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # return 0 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@211 -- # local 
dev=cvl_0_0 in_ns= 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:11:53.854 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@274 -- # iptr 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-save 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:11:53.855 11:55:18 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-restore 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:11:53.855 00:11:53.855 real 0m14.725s 00:11:53.855 user 0m23.006s 00:11:53.855 sys 0m6.882s 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:11:53.855 ************************************ 00:11:53.855 END TEST nvmf_host_management 00:11:53.855 ************************************ 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:53.855 ************************************ 00:11:53.855 START TEST nvmf_lvol 00:11:53.855 ************************************ 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:11:53.855 * Looking for test storage... 
00:11:53.855 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.855 11:55:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:53.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.855 --rc genhtml_branch_coverage=1 00:11:53.855 --rc genhtml_function_coverage=1 00:11:53.855 --rc genhtml_legend=1 00:11:53.855 --rc geninfo_all_blocks=1 00:11:53.855 --rc geninfo_unexecuted_blocks=1 
00:11:53.855 00:11:53.855 ' 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:53.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.855 --rc genhtml_branch_coverage=1 00:11:53.855 --rc genhtml_function_coverage=1 00:11:53.855 --rc genhtml_legend=1 00:11:53.855 --rc geninfo_all_blocks=1 00:11:53.855 --rc geninfo_unexecuted_blocks=1 00:11:53.855 00:11:53.855 ' 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:53.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.855 --rc genhtml_branch_coverage=1 00:11:53.855 --rc genhtml_function_coverage=1 00:11:53.855 --rc genhtml_legend=1 00:11:53.855 --rc geninfo_all_blocks=1 00:11:53.855 --rc geninfo_unexecuted_blocks=1 00:11:53.855 00:11:53.855 ' 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:53.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.855 --rc genhtml_branch_coverage=1 00:11:53.855 --rc genhtml_function_coverage=1 00:11:53.855 --rc genhtml_legend=1 00:11:53.855 --rc geninfo_all_blocks=1 00:11:53.855 --rc geninfo_unexecuted_blocks=1 00:11:53.855 00:11:53.855 ' 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.855 11:55:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.855 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:53.856 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 
-- # _remove_target_ns 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:11:53.856 11:55:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:12:01.994 11:55:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:01.994 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:01.994 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:01.995 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- 
# [[ e810 == e810 ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:01.995 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:01.995 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@247 -- # create_target_ns 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD 
]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:12:01.995 11:55:25 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:01.995 11:55:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # 
eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:01.995 10.0.0.1 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:01.995 10.0.0.2 00:12:01.995 11:55:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:12:01.995 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 
-- # dev_map["initiator$id"]=cvl_0_0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:01.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:01.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.648 ms 00:12:01.996 00:12:01.996 --- 10.0.0.1 ping statistics --- 00:12:01.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.996 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:01.996 11:55:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:12:01.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:01.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:12:01.996 00:12:01.996 --- 10.0.0.2 ping statistics --- 00:12:01.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:01.996 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 
00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 
-- # get_net_dev initiator1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:12:01.996 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:12:01.997 11:55:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:12:01.997 ' 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=1171468 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 1171468 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:12:01.997 
11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1171468 ']' 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:01.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:01.997 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:01.997 [2024-12-05 11:55:26.489507] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:12:01.997 [2024-12-05 11:55:26.489575] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:01.997 [2024-12-05 11:55:26.590286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:01.997 [2024-12-05 11:55:26.642888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:01.997 [2024-12-05 11:55:26.642942] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:01.997 [2024-12-05 11:55:26.642951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:01.997 [2024-12-05 11:55:26.642958] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:12:01.997 [2024-12-05 11:55:26.642964] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:01.997 [2024-12-05 11:55:26.644935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:01.997 [2024-12-05 11:55:26.645092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.997 [2024-12-05 11:55:26.645093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.258 11:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:02.258 11:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:12:02.258 11:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:02.258 11:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:02.258 11:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:02.518 11:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:02.518 11:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:02.518 [2024-12-05 11:55:27.514384] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:02.518 11:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:02.778 11:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:12:02.778 11:55:27 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:03.038 11:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:12:03.038 11:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:12:03.299 11:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:12:03.560 11:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b5a39c02-d64d-41de-b3b4-5a7239e12c97 00:12:03.560 11:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b5a39c02-d64d-41de-b3b4-5a7239e12c97 lvol 20 00:12:03.560 11:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c54121d8-49a8-46fd-96d9-2d13f855f9b7 00:12:03.560 11:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:03.820 11:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c54121d8-49a8-46fd-96d9-2d13f855f9b7 00:12:04.097 11:55:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:04.423 [2024-12-05 11:55:29.159589] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.423 11:55:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:04.423 11:55:29 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1171985 00:12:04.423 11:55:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:12:04.423 11:55:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:12:05.365 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c54121d8-49a8-46fd-96d9-2d13f855f9b7 MY_SNAPSHOT 00:12:05.624 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=22153932-3df9-4e55-b493-109986f48058 00:12:05.624 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c54121d8-49a8-46fd-96d9-2d13f855f9b7 30 00:12:05.884 11:55:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 22153932-3df9-4e55-b493-109986f48058 MY_CLONE 00:12:06.144 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bfb699e6-2099-41ee-973e-fa89ad1ded23 00:12:06.144 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bfb699e6-2099-41ee-973e-fa89ad1ded23 00:12:06.404 11:55:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1171985 00:12:16.402 Initializing NVMe Controllers 00:12:16.402 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:16.402 Controller IO queue size 128, less than required. 
00:12:16.402 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:16.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:12:16.402 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:12:16.402 Initialization complete. Launching workers. 00:12:16.402 ======================================================== 00:12:16.402 Latency(us) 00:12:16.402 Device Information : IOPS MiB/s Average min max 00:12:16.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 16142.30 63.06 7933.27 1622.55 45596.14 00:12:16.402 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 16854.50 65.84 7596.37 3300.01 61585.20 00:12:16.402 ======================================================== 00:12:16.402 Total : 32996.80 128.89 7761.18 1622.55 61585.20 00:12:16.402 00:12:16.402 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:16.402 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c54121d8-49a8-46fd-96d9-2d13f855f9b7 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b5a39c02-d64d-41de-b3b4-5a7239e12c97 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:16.402 11:55:40 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:16.402 rmmod nvme_tcp 00:12:16.402 rmmod nvme_fabrics 00:12:16.402 rmmod nvme_keyring 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 1171468 ']' 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 1171468 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1171468 ']' 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1171468 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1171468 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 1171468' 00:12:16.402 killing process with pid 1171468 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1171468 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1171468 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@254 -- # local dev 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@257 -- # remove_target_ns 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:16.402 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:17.786 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@258 -- # delete_main_bridge 00:12:17.786 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:17.786 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # return 0 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@212 -- 
# [[ -n '' ]] 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@274 -- # iptr 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-save 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-restore 00:12:17.787 00:12:17.787 real 0m24.125s 00:12:17.787 user 1m4.733s 00:12:17.787 sys 0m8.918s 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:12:17.787 ************************************ 00:12:17.787 END TEST nvmf_lvol 00:12:17.787 ************************************ 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:17.787 ************************************ 00:12:17.787 START TEST nvmf_lvs_grow 00:12:17.787 ************************************ 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:12:17.787 * Looking for test storage... 
00:12:17.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:12:17.787 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:18.048 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:18.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.049 --rc genhtml_branch_coverage=1 00:12:18.049 --rc 
genhtml_function_coverage=1 00:12:18.049 --rc genhtml_legend=1 00:12:18.049 --rc geninfo_all_blocks=1 00:12:18.049 --rc geninfo_unexecuted_blocks=1 00:12:18.049 00:12:18.049 ' 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:18.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.049 --rc genhtml_branch_coverage=1 00:12:18.049 --rc genhtml_function_coverage=1 00:12:18.049 --rc genhtml_legend=1 00:12:18.049 --rc geninfo_all_blocks=1 00:12:18.049 --rc geninfo_unexecuted_blocks=1 00:12:18.049 00:12:18.049 ' 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:18.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.049 --rc genhtml_branch_coverage=1 00:12:18.049 --rc genhtml_function_coverage=1 00:12:18.049 --rc genhtml_legend=1 00:12:18.049 --rc geninfo_all_blocks=1 00:12:18.049 --rc geninfo_unexecuted_blocks=1 00:12:18.049 00:12:18.049 ' 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:18.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:18.049 --rc genhtml_branch_coverage=1 00:12:18.049 --rc genhtml_function_coverage=1 00:12:18.049 --rc genhtml_legend=1 00:12:18.049 --rc geninfo_all_blocks=1 00:12:18.049 --rc geninfo_unexecuted_blocks=1 00:12:18.049 00:12:18.049 ' 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:18.049 11:55:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 
00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:18.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:18.049 11:55:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:12:18.049 11:55:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:12:26.187 
11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # x722=() 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.187 11:55:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:26.187 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:26.187 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:26.187 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:26.188 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.188 11:55:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:26.188 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@247 -- # create_target_ns 00:12:26.188 11:55:50 
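The discovery loop above (common.sh@226-@245) maps each NVMe-oF-capable PCI address to its kernel network interface by globbing sysfs and stripping the path prefix. A minimal sketch of that mapping, run against a throwaway fake sysfs tree rather than the real `/sys` (the PCI address and `cvl_0_0` name are taken from the log; the temp-dir layout is an assumption for illustration):

```shell
#!/usr/bin/env bash
# Fake sysfs tree standing in for /sys/bus/pci/devices (assumption: layout only).
sysfs=$(mktemp -d)
pci=0000:4b:00.0
mkdir -p "$sysfs/$pci/net/cvl_0_0"

# Same pattern as common.sh@227/@243: glob the net/ subdirectory of the PCI
# device, then strip the leading directory to keep just the interface name.
pci_net_devs=("$sysfs/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "Found net devices under $pci: ${pci_net_devs[*]}"

rm -rf "$sysfs"
```

With two matching E810 ports, the loop runs twice and appends both names to `net_devs`, which is why the log reports `cvl_0_0` and `cvl_0_1`.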
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:26.188 11:55:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 
00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:26.188 10.0.0.1 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@11 -- # local val=167772162 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:26.188 10.0.0.2 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
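The `set_ip` calls above lean on `val_to_ip`, which unpacks a 32-bit integer (167772161 = 0x0a000001) into dotted-quad form before handing it to `ip addr add`. The log only shows the final `printf '%u.%u.%u.%u'` with the octets already split, so the shift arithmetic below is an assumed reconstruction that produces the same result:

```shell
#!/usr/bin/env bash
# Hypothetical re-creation of setup.sh's val_to_ip: unpack a 32-bit integer
# into dotted-quad notation, one octet per byte (shifts are an assumption;
# only the printf format string appears in the log).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2
```

Keeping the pool as a plain integer lets the caller allocate consecutive addresses with ordinary arithmetic (`$((++ip))`), as seen at setup.sh@48.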
nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:26.188 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # 
get_initiator_ip_address initiator0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:26.189 11:55:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:26.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.668 ms 00:12:26.189 00:12:26.189 --- 10.0.0.1 ping statistics --- 00:12:26.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.189 rtt min/avg/max/mdev = 0.668/0.668/0.668/0.000 ms 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:12:26.189 11:55:50 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:12:26.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:26.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:12:26.189 00:12:26.189 --- 10.0.0.2 ping statistics --- 00:12:26.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.189 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 
00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:26.189 
11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:26.189 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 
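Stepping back: `setup_interfaces` hands out addresses in initiator/target pairs, two consecutive IPs per pair, from a pool starting at 0x0a000001 (setup.sh@25). With `no=1` only `initiator0`/`target0` exist, which is why the `initiator1`/`target1` probes above hit `return 1` and `NVMF_SECOND_INITIATOR_IP`/`NVMF_SECOND_TARGET_IP` stay empty. A sketch of that allocation under those assumptions (the loop shape is illustrative, not the script's literal code):

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer to dotted-quad (same conversion as val_to_ip).
to_quad() {
  printf '%u.%u.%u.%u' \
    $(( ($1 >> 24) & 255 )) $(( ($1 >> 16) & 255 )) \
    $(( ($1 >> 8)  & 255 )) $((  $1        & 255 ))
}

ip_pool=$(( 0x0a000001 ))   # 10.0.0.1, the pool start seen in setup.sh@25
no=1                        # one initiator/target pair, as in this run

for (( pair = 0; pair < no; pair++ )); do
  # Each pair consumes two consecutive addresses: initiator, then target.
  echo "pair$pair: initiator=$(to_quad "$ip_pool") target=$(to_quad $(( ip_pool + 1 )))"
  ip_pool=$(( ip_pool + 2 ))
done
# prints: pair0: initiator=10.0.0.1 target=10.0.0.2
```

A second pair, had the run requested one, would land on 10.0.0.3/10.0.0.4 and populate the `*_SECOND_*` variables instead of leaving them blank.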
00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:12:26.190 ' 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=1178602 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 1178602 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1178602 ']' 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.190 11:55:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:26.190 [2024-12-05 11:55:50.710827] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:12:26.190 [2024-12-05 11:55:50.710895] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.190 [2024-12-05 11:55:50.811936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.190 [2024-12-05 11:55:50.863622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.190 [2024-12-05 11:55:50.863679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.190 [2024-12-05 11:55:50.863688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.190 [2024-12-05 11:55:50.863695] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.190 [2024-12-05 11:55:50.863701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:26.190 [2024-12-05 11:55:50.864488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.761 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.761 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:12:26.761 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:26.761 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:26.761 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:26.761 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.761 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:26.761 [2024-12-05 11:55:51.740857] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.761 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:12:26.761 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:26.761 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.761 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:27.022 ************************************ 00:12:27.022 START TEST lvs_grow_clean 00:12:27.022 ************************************ 00:12:27.022 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:12:27.022 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:12:27.022 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:27.022 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:27.022 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:27.022 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:27.022 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:27.022 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:27.022 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:27.022 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:27.022 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:27.022 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:27.283 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=842b8c5c-0a14-4c35-8174-638fc90e402a 00:12:27.283 11:55:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 842b8c5c-0a14-4c35-8174-638fc90e402a 00:12:27.283 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:27.545 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:27.545 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:27.545 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 842b8c5c-0a14-4c35-8174-638fc90e402a lvol 150 00:12:27.806 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=bcba0341-1a46-4e50-b5fa-449d14401051 00:12:27.806 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:27.806 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:27.806 [2024-12-05 11:55:52.760988] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:12:27.806 [2024-12-05 11:55:52.761062] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:27.806 true 00:12:27.806 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 842b8c5c-0a14-4c35-8174-638fc90e402a 00:12:27.806 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:28.067 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:28.067 11:55:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:28.327 11:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bcba0341-1a46-4e50-b5fa-449d14401051 00:12:28.327 11:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:28.588 [2024-12-05 11:55:53.475272] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.588 11:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:28.849 11:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1179121 00:12:28.849 11:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:28.849 11:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:28.849 11:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1179121 /var/tmp/bdevperf.sock 00:12:28.849 11:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1179121 ']' 00:12:28.849 11:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:28.849 11:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.849 11:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:28.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:28.849 11:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.849 11:55:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:28.849 [2024-12-05 11:55:53.732956] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:12:28.849 [2024-12-05 11:55:53.733024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179121 ] 00:12:28.849 [2024-12-05 11:55:53.826305] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.849 [2024-12-05 11:55:53.878986] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:29.790 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.790 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:12:29.790 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:30.050 Nvme0n1 00:12:30.050 11:55:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:30.309 [ 00:12:30.309 { 00:12:30.309 "name": "Nvme0n1", 00:12:30.309 "aliases": [ 00:12:30.309 "bcba0341-1a46-4e50-b5fa-449d14401051" 00:12:30.309 ], 00:12:30.309 "product_name": "NVMe disk", 00:12:30.309 "block_size": 4096, 00:12:30.309 "num_blocks": 38912, 00:12:30.309 "uuid": "bcba0341-1a46-4e50-b5fa-449d14401051", 00:12:30.309 "numa_id": 0, 00:12:30.309 "assigned_rate_limits": { 00:12:30.309 "rw_ios_per_sec": 0, 00:12:30.309 "rw_mbytes_per_sec": 0, 00:12:30.309 "r_mbytes_per_sec": 0, 00:12:30.309 "w_mbytes_per_sec": 0 00:12:30.309 }, 00:12:30.309 "claimed": false, 00:12:30.309 "zoned": false, 00:12:30.309 "supported_io_types": { 00:12:30.309 "read": true, 
00:12:30.309 "write": true, 00:12:30.309 "unmap": true, 00:12:30.309 "flush": true, 00:12:30.309 "reset": true, 00:12:30.309 "nvme_admin": true, 00:12:30.309 "nvme_io": true, 00:12:30.309 "nvme_io_md": false, 00:12:30.309 "write_zeroes": true, 00:12:30.309 "zcopy": false, 00:12:30.310 "get_zone_info": false, 00:12:30.310 "zone_management": false, 00:12:30.310 "zone_append": false, 00:12:30.310 "compare": true, 00:12:30.310 "compare_and_write": true, 00:12:30.310 "abort": true, 00:12:30.310 "seek_hole": false, 00:12:30.310 "seek_data": false, 00:12:30.310 "copy": true, 00:12:30.310 "nvme_iov_md": false 00:12:30.310 }, 00:12:30.310 "memory_domains": [ 00:12:30.310 { 00:12:30.310 "dma_device_id": "system", 00:12:30.310 "dma_device_type": 1 00:12:30.310 } 00:12:30.310 ], 00:12:30.310 "driver_specific": { 00:12:30.310 "nvme": [ 00:12:30.310 { 00:12:30.310 "trid": { 00:12:30.310 "trtype": "TCP", 00:12:30.310 "adrfam": "IPv4", 00:12:30.310 "traddr": "10.0.0.2", 00:12:30.310 "trsvcid": "4420", 00:12:30.310 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:30.310 }, 00:12:30.310 "ctrlr_data": { 00:12:30.310 "cntlid": 1, 00:12:30.310 "vendor_id": "0x8086", 00:12:30.310 "model_number": "SPDK bdev Controller", 00:12:30.310 "serial_number": "SPDK0", 00:12:30.310 "firmware_revision": "25.01", 00:12:30.310 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:30.310 "oacs": { 00:12:30.310 "security": 0, 00:12:30.310 "format": 0, 00:12:30.310 "firmware": 0, 00:12:30.310 "ns_manage": 0 00:12:30.310 }, 00:12:30.310 "multi_ctrlr": true, 00:12:30.310 "ana_reporting": false 00:12:30.310 }, 00:12:30.310 "vs": { 00:12:30.310 "nvme_version": "1.3" 00:12:30.310 }, 00:12:30.310 "ns_data": { 00:12:30.310 "id": 1, 00:12:30.310 "can_share": true 00:12:30.310 } 00:12:30.310 } 00:12:30.310 ], 00:12:30.310 "mp_policy": "active_passive" 00:12:30.310 } 00:12:30.310 } 00:12:30.310 ] 00:12:30.310 11:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1179436 00:12:30.310 11:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:30.310 11:55:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:30.310 Running I/O for 10 seconds... 00:12:31.251 Latency(us) 00:12:31.251 [2024-12-05T10:55:56.300Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:31.251 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:31.251 Nvme0n1 : 1.00 25179.00 98.36 0.00 0.00 0.00 0.00 0.00 00:12:31.251 [2024-12-05T10:55:56.300Z] =================================================================================================================== 00:12:31.251 [2024-12-05T10:55:56.300Z] Total : 25179.00 98.36 0.00 0.00 0.00 0.00 0.00 00:12:31.251 00:12:32.192 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 842b8c5c-0a14-4c35-8174-638fc90e402a 00:12:32.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:32.453 Nvme0n1 : 2.00 25355.50 99.04 0.00 0.00 0.00 0.00 0.00 00:12:32.453 [2024-12-05T10:55:57.502Z] =================================================================================================================== 00:12:32.453 [2024-12-05T10:55:57.502Z] Total : 25355.50 99.04 0.00 0.00 0.00 0.00 0.00 00:12:32.453 00:12:32.453 true 00:12:32.453 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 842b8c5c-0a14-4c35-8174-638fc90e402a 00:12:32.453 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:12:32.713 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:32.713 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:32.713 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1179436 00:12:33.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:33.284 Nvme0n1 : 3.00 25247.00 98.62 0.00 0.00 0.00 0.00 0.00 00:12:33.284 [2024-12-05T10:55:58.333Z] =================================================================================================================== 00:12:33.284 [2024-12-05T10:55:58.333Z] Total : 25247.00 98.62 0.00 0.00 0.00 0.00 0.00 00:12:33.284 00:12:34.668 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:34.668 Nvme0n1 : 4.00 25197.25 98.43 0.00 0.00 0.00 0.00 0.00 00:12:34.668 [2024-12-05T10:55:59.717Z] =================================================================================================================== 00:12:34.668 [2024-12-05T10:55:59.717Z] Total : 25197.25 98.43 0.00 0.00 0.00 0.00 0.00 00:12:34.668 00:12:35.239 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:35.239 Nvme0n1 : 5.00 25170.60 98.32 0.00 0.00 0.00 0.00 0.00 00:12:35.239 [2024-12-05T10:56:00.288Z] =================================================================================================================== 00:12:35.239 [2024-12-05T10:56:00.288Z] Total : 25170.60 98.32 0.00 0.00 0.00 0.00 0.00 00:12:35.239 00:12:36.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:36.623 Nvme0n1 : 6.00 25152.83 98.25 0.00 0.00 0.00 0.00 0.00 00:12:36.623 [2024-12-05T10:56:01.672Z] =================================================================================================================== 00:12:36.623 
[2024-12-05T10:56:01.672Z] Total : 25152.83 98.25 0.00 0.00 0.00 0.00 0.00 00:12:36.623 00:12:37.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:37.565 Nvme0n1 : 7.00 25150.43 98.24 0.00 0.00 0.00 0.00 0.00 00:12:37.565 [2024-12-05T10:56:02.614Z] =================================================================================================================== 00:12:37.565 [2024-12-05T10:56:02.614Z] Total : 25150.43 98.24 0.00 0.00 0.00 0.00 0.00 00:12:37.565 00:12:38.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:38.506 Nvme0n1 : 8.00 25145.62 98.23 0.00 0.00 0.00 0.00 0.00 00:12:38.506 [2024-12-05T10:56:03.555Z] =================================================================================================================== 00:12:38.506 [2024-12-05T10:56:03.555Z] Total : 25145.62 98.23 0.00 0.00 0.00 0.00 0.00 00:12:38.506 00:12:39.445 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:39.446 Nvme0n1 : 9.00 25141.89 98.21 0.00 0.00 0.00 0.00 0.00 00:12:39.446 [2024-12-05T10:56:04.495Z] =================================================================================================================== 00:12:39.446 [2024-12-05T10:56:04.495Z] Total : 25141.89 98.21 0.00 0.00 0.00 0.00 0.00 00:12:39.446 00:12:40.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:40.382 Nvme0n1 : 10.00 25141.30 98.21 0.00 0.00 0.00 0.00 0.00 00:12:40.382 [2024-12-05T10:56:05.431Z] =================================================================================================================== 00:12:40.382 [2024-12-05T10:56:05.431Z] Total : 25141.30 98.21 0.00 0.00 0.00 0.00 0.00 00:12:40.382 00:12:40.382 00:12:40.382 Latency(us) 00:12:40.382 [2024-12-05T10:56:05.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:12:40.382 Nvme0n1 : 10.01 25139.52 98.20 0.00 0.00 5087.54 2402.99 13271.04 00:12:40.382 [2024-12-05T10:56:05.431Z] =================================================================================================================== 00:12:40.382 [2024-12-05T10:56:05.431Z] Total : 25139.52 98.20 0.00 0.00 5087.54 2402.99 13271.04 00:12:40.382 { 00:12:40.382 "results": [ 00:12:40.382 { 00:12:40.382 "job": "Nvme0n1", 00:12:40.382 "core_mask": "0x2", 00:12:40.383 "workload": "randwrite", 00:12:40.383 "status": "finished", 00:12:40.383 "queue_depth": 128, 00:12:40.383 "io_size": 4096, 00:12:40.383 "runtime": 10.005162, 00:12:40.383 "iops": 25139.522978238634, 00:12:40.383 "mibps": 98.20126163374466, 00:12:40.383 "io_failed": 0, 00:12:40.383 "io_timeout": 0, 00:12:40.383 "avg_latency_us": 5087.540784362058, 00:12:40.383 "min_latency_us": 2402.9866666666667, 00:12:40.383 "max_latency_us": 13271.04 00:12:40.383 } 00:12:40.383 ], 00:12:40.383 "core_count": 1 00:12:40.383 } 00:12:40.383 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1179121 00:12:40.383 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1179121 ']' 00:12:40.383 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1179121 00:12:40.383 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:12:40.383 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.383 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1179121 00:12:40.383 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:40.383 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean 
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:40.383 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1179121' 00:12:40.383 killing process with pid 1179121 00:12:40.383 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1179121 00:12:40.383 Received shutdown signal, test time was about 10.000000 seconds 00:12:40.383 00:12:40.383 Latency(us) 00:12:40.383 [2024-12-05T10:56:05.432Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:40.383 [2024-12-05T10:56:05.432Z] =================================================================================================================== 00:12:40.383 [2024-12-05T10:56:05.432Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:40.383 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1179121 00:12:40.642 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:40.642 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:40.901 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 842b8c5c-0a14-4c35-8174-638fc90e402a 00:12:40.901 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:41.160 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:12:41.160 11:56:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:12:41.160 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:41.160 [2024-12-05 11:56:06.171976] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 842b8c5c-0a14-4c35-8174-638fc90e402a 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 842b8c5c-0a14-4c35-8174-638fc90e402a 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:41.420 11:56:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 842b8c5c-0a14-4c35-8174-638fc90e402a 00:12:41.420 request: 00:12:41.420 { 00:12:41.420 "uuid": "842b8c5c-0a14-4c35-8174-638fc90e402a", 00:12:41.420 "method": "bdev_lvol_get_lvstores", 00:12:41.420 "req_id": 1 00:12:41.420 } 00:12:41.420 Got JSON-RPC error response 00:12:41.420 response: 00:12:41.420 { 00:12:41.420 "code": -19, 00:12:41.420 "message": "No such device" 00:12:41.420 } 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:41.420 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:41.679 aio_bdev 00:12:41.679 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev bcba0341-1a46-4e50-b5fa-449d14401051 00:12:41.679 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=bcba0341-1a46-4e50-b5fa-449d14401051 00:12:41.679 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:41.679 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:12:41.679 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:41.679 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:41.679 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:41.939 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bcba0341-1a46-4e50-b5fa-449d14401051 -t 2000 00:12:41.939 [ 00:12:41.939 { 00:12:41.939 "name": "bcba0341-1a46-4e50-b5fa-449d14401051", 00:12:41.939 "aliases": [ 00:12:41.939 "lvs/lvol" 00:12:41.939 ], 00:12:41.939 "product_name": "Logical Volume", 00:12:41.939 "block_size": 4096, 00:12:41.939 "num_blocks": 38912, 00:12:41.939 "uuid": "bcba0341-1a46-4e50-b5fa-449d14401051", 00:12:41.939 "assigned_rate_limits": { 00:12:41.939 "rw_ios_per_sec": 0, 00:12:41.939 "rw_mbytes_per_sec": 0, 00:12:41.939 "r_mbytes_per_sec": 0, 00:12:41.939 "w_mbytes_per_sec": 0 00:12:41.939 }, 00:12:41.939 "claimed": false, 00:12:41.939 "zoned": false, 00:12:41.939 "supported_io_types": { 00:12:41.939 "read": true, 00:12:41.939 "write": true, 00:12:41.939 "unmap": true, 00:12:41.939 "flush": false, 00:12:41.939 "reset": true, 00:12:41.939 
"nvme_admin": false, 00:12:41.939 "nvme_io": false, 00:12:41.939 "nvme_io_md": false, 00:12:41.939 "write_zeroes": true, 00:12:41.939 "zcopy": false, 00:12:41.939 "get_zone_info": false, 00:12:41.939 "zone_management": false, 00:12:41.939 "zone_append": false, 00:12:41.939 "compare": false, 00:12:41.939 "compare_and_write": false, 00:12:41.939 "abort": false, 00:12:41.939 "seek_hole": true, 00:12:41.939 "seek_data": true, 00:12:41.939 "copy": false, 00:12:41.939 "nvme_iov_md": false 00:12:41.939 }, 00:12:41.939 "driver_specific": { 00:12:41.939 "lvol": { 00:12:41.939 "lvol_store_uuid": "842b8c5c-0a14-4c35-8174-638fc90e402a", 00:12:41.939 "base_bdev": "aio_bdev", 00:12:41.939 "thin_provision": false, 00:12:41.939 "num_allocated_clusters": 38, 00:12:41.939 "snapshot": false, 00:12:41.939 "clone": false, 00:12:41.939 "esnap_clone": false 00:12:41.939 } 00:12:41.939 } 00:12:41.939 } 00:12:41.939 ] 00:12:41.939 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:12:41.939 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 842b8c5c-0a14-4c35-8174-638fc90e402a 00:12:41.939 11:56:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:42.199 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:42.199 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 842b8c5c-0a14-4c35-8174-638fc90e402a 00:12:42.199 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:42.458 11:56:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:42.458 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bcba0341-1a46-4e50-b5fa-449d14401051 00:12:42.458 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 842b8c5c-0a14-4c35-8174-638fc90e402a 00:12:42.717 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:42.977 00:12:42.977 real 0m16.083s 00:12:42.977 user 0m15.648s 00:12:42.977 sys 0m1.541s 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:12:42.977 ************************************ 00:12:42.977 END TEST lvs_grow_clean 00:12:42.977 ************************************ 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:12:42.977 ************************************ 
00:12:42.977 START TEST lvs_grow_dirty 00:12:42.977 ************************************ 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:42.977 11:56:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:43.237 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:12:43.237 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:12:43.497 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=2614b038-ee42-4c04-a20d-5f5dc08373a4 00:12:43.497 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:12:43.497 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2614b038-ee42-4c04-a20d-5f5dc08373a4 00:12:43.497 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:12:43.497 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:12:43.497 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2614b038-ee42-4c04-a20d-5f5dc08373a4 lvol 150 00:12:43.757 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=094b9ec3-e133-4270-b387-6194c4c3763d 00:12:43.757 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:12:43.757 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:12:44.017 [2024-12-05 11:56:08.825514] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:12:44.018 [2024-12-05 11:56:08.825554] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:12:44.018 true 00:12:44.018 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2614b038-ee42-4c04-a20d-5f5dc08373a4 00:12:44.018 11:56:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:12:44.018 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:12:44.018 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:44.278 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 094b9ec3-e133-4270-b387-6194c4c3763d 00:12:44.538 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:44.538 [2024-12-05 11:56:09.483419] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.538 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:44.798 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1182511 00:12:44.798 11:56:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:44.798 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:12:44.798 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1182511 /var/tmp/bdevperf.sock 00:12:44.798 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1182511 ']' 00:12:44.798 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:44.798 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:44.798 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:44.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:44.798 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:44.798 11:56:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:44.798 [2024-12-05 11:56:09.717488] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:12:44.798 [2024-12-05 11:56:09.717543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182511 ] 00:12:44.798 [2024-12-05 11:56:09.798459] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.798 [2024-12-05 11:56:09.828288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.736 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:45.736 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:45.736 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:12:45.996 Nvme0n1 00:12:45.996 11:56:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:12:45.996 [ 00:12:45.996 { 00:12:45.996 "name": "Nvme0n1", 00:12:45.996 "aliases": [ 00:12:45.996 "094b9ec3-e133-4270-b387-6194c4c3763d" 00:12:45.996 ], 00:12:45.996 "product_name": "NVMe disk", 00:12:45.996 "block_size": 4096, 00:12:45.996 "num_blocks": 38912, 00:12:45.996 "uuid": "094b9ec3-e133-4270-b387-6194c4c3763d", 00:12:45.996 "numa_id": 0, 00:12:45.996 "assigned_rate_limits": { 00:12:45.996 "rw_ios_per_sec": 0, 00:12:45.996 "rw_mbytes_per_sec": 0, 00:12:45.996 "r_mbytes_per_sec": 0, 00:12:45.996 "w_mbytes_per_sec": 0 00:12:45.996 }, 00:12:45.996 "claimed": false, 00:12:45.996 "zoned": false, 00:12:45.996 "supported_io_types": { 00:12:45.996 "read": true, 
00:12:45.996 "write": true, 00:12:45.996 "unmap": true, 00:12:45.996 "flush": true, 00:12:45.996 "reset": true, 00:12:45.996 "nvme_admin": true, 00:12:45.996 "nvme_io": true, 00:12:45.996 "nvme_io_md": false, 00:12:45.996 "write_zeroes": true, 00:12:45.996 "zcopy": false, 00:12:45.996 "get_zone_info": false, 00:12:45.996 "zone_management": false, 00:12:45.996 "zone_append": false, 00:12:45.996 "compare": true, 00:12:45.996 "compare_and_write": true, 00:12:45.996 "abort": true, 00:12:45.996 "seek_hole": false, 00:12:45.996 "seek_data": false, 00:12:45.996 "copy": true, 00:12:45.996 "nvme_iov_md": false 00:12:45.996 }, 00:12:45.996 "memory_domains": [ 00:12:45.996 { 00:12:45.996 "dma_device_id": "system", 00:12:45.996 "dma_device_type": 1 00:12:45.996 } 00:12:45.996 ], 00:12:45.996 "driver_specific": { 00:12:45.996 "nvme": [ 00:12:45.996 { 00:12:45.996 "trid": { 00:12:45.996 "trtype": "TCP", 00:12:45.996 "adrfam": "IPv4", 00:12:45.996 "traddr": "10.0.0.2", 00:12:45.996 "trsvcid": "4420", 00:12:45.996 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:12:45.996 }, 00:12:45.996 "ctrlr_data": { 00:12:45.996 "cntlid": 1, 00:12:45.996 "vendor_id": "0x8086", 00:12:45.996 "model_number": "SPDK bdev Controller", 00:12:45.996 "serial_number": "SPDK0", 00:12:45.996 "firmware_revision": "25.01", 00:12:45.996 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:12:45.996 "oacs": { 00:12:45.996 "security": 0, 00:12:45.996 "format": 0, 00:12:45.996 "firmware": 0, 00:12:45.996 "ns_manage": 0 00:12:45.996 }, 00:12:45.996 "multi_ctrlr": true, 00:12:45.996 "ana_reporting": false 00:12:45.996 }, 00:12:45.996 "vs": { 00:12:45.996 "nvme_version": "1.3" 00:12:45.996 }, 00:12:45.996 "ns_data": { 00:12:45.996 "id": 1, 00:12:45.996 "can_share": true 00:12:45.996 } 00:12:45.996 } 00:12:45.996 ], 00:12:45.996 "mp_policy": "active_passive" 00:12:45.996 } 00:12:45.996 } 00:12:45.996 ] 00:12:45.996 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1182714 00:12:45.996 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:12:45.996 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:46.256 Running I/O for 10 seconds... 00:12:47.346 Latency(us) 00:12:47.346 [2024-12-05T10:56:12.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:47.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:47.346 Nvme0n1 : 1.00 25299.00 98.82 0.00 0.00 0.00 0.00 0.00 00:12:47.346 [2024-12-05T10:56:12.395Z] =================================================================================================================== 00:12:47.346 [2024-12-05T10:56:12.395Z] Total : 25299.00 98.82 0.00 0.00 0.00 0.00 0.00 00:12:47.346 00:12:48.286 11:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2614b038-ee42-4c04-a20d-5f5dc08373a4 00:12:48.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:48.286 Nvme0n1 : 2.00 25448.50 99.41 0.00 0.00 0.00 0.00 0.00 00:12:48.286 [2024-12-05T10:56:13.335Z] =================================================================================================================== 00:12:48.286 [2024-12-05T10:56:13.335Z] Total : 25448.50 99.41 0.00 0.00 0.00 0.00 0.00 00:12:48.286 00:12:48.286 true 00:12:48.286 11:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2614b038-ee42-4c04-a20d-5f5dc08373a4 00:12:48.286 11:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:12:48.546 11:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:12:48.546 11:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:12:48.546 11:56:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1182714 00:12:49.117 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:49.117 Nvme0n1 : 3.00 25541.67 99.77 0.00 0.00 0.00 0.00 0.00 00:12:49.117 [2024-12-05T10:56:14.166Z] =================================================================================================================== 00:12:49.117 [2024-12-05T10:56:14.166Z] Total : 25541.67 99.77 0.00 0.00 0.00 0.00 0.00 00:12:49.117 00:12:50.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:50.059 Nvme0n1 : 4.00 25588.00 99.95 0.00 0.00 0.00 0.00 0.00 00:12:50.059 [2024-12-05T10:56:15.108Z] =================================================================================================================== 00:12:50.059 [2024-12-05T10:56:15.108Z] Total : 25588.00 99.95 0.00 0.00 0.00 0.00 0.00 00:12:50.059 00:12:51.453 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:51.453 Nvme0n1 : 5.00 25628.20 100.11 0.00 0.00 0.00 0.00 0.00 00:12:51.453 [2024-12-05T10:56:16.502Z] =================================================================================================================== 00:12:51.453 [2024-12-05T10:56:16.502Z] Total : 25628.20 100.11 0.00 0.00 0.00 0.00 0.00 00:12:51.453 00:12:52.394 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:52.394 Nvme0n1 : 6.00 25666.00 100.26 0.00 0.00 0.00 0.00 0.00 00:12:52.394 [2024-12-05T10:56:17.443Z] =================================================================================================================== 00:12:52.394 
[2024-12-05T10:56:17.443Z] Total : 25666.00 100.26 0.00 0.00 0.00 0.00 0.00 00:12:52.394 00:12:53.080 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:53.080 Nvme0n1 : 7.00 25682.86 100.32 0.00 0.00 0.00 0.00 0.00 00:12:53.080 [2024-12-05T10:56:18.129Z] =================================================================================================================== 00:12:53.080 [2024-12-05T10:56:18.129Z] Total : 25682.86 100.32 0.00 0.00 0.00 0.00 0.00 00:12:53.080 00:12:54.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:54.465 Nvme0n1 : 8.00 25703.88 100.41 0.00 0.00 0.00 0.00 0.00 00:12:54.465 [2024-12-05T10:56:19.514Z] =================================================================================================================== 00:12:54.465 [2024-12-05T10:56:19.514Z] Total : 25703.88 100.41 0.00 0.00 0.00 0.00 0.00 00:12:54.465 00:12:55.404 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:55.404 Nvme0n1 : 9.00 25712.89 100.44 0.00 0.00 0.00 0.00 0.00 00:12:55.404 [2024-12-05T10:56:20.453Z] =================================================================================================================== 00:12:55.404 [2024-12-05T10:56:20.453Z] Total : 25712.89 100.44 0.00 0.00 0.00 0.00 0.00 00:12:55.404 00:12:56.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:12:56.346 Nvme0n1 : 10.00 25727.00 100.50 0.00 0.00 0.00 0.00 0.00 00:12:56.346 [2024-12-05T10:56:21.395Z] =================================================================================================================== 00:12:56.346 [2024-12-05T10:56:21.395Z] Total : 25727.00 100.50 0.00 0.00 0.00 0.00 0.00 00:12:56.346 00:12:56.346 00:12:56.346 Latency(us) 00:12:56.346 [2024-12-05T10:56:21.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:12:56.346 Nvme0n1 : 10.00 25725.03 100.49 0.00 0.00 4972.64 3031.04 13161.81 00:12:56.346 [2024-12-05T10:56:21.395Z] =================================================================================================================== 00:12:56.346 [2024-12-05T10:56:21.395Z] Total : 25725.03 100.49 0.00 0.00 4972.64 3031.04 13161.81 00:12:56.346 { 00:12:56.346 "results": [ 00:12:56.346 { 00:12:56.346 "job": "Nvme0n1", 00:12:56.346 "core_mask": "0x2", 00:12:56.346 "workload": "randwrite", 00:12:56.346 "status": "finished", 00:12:56.346 "queue_depth": 128, 00:12:56.346 "io_size": 4096, 00:12:56.346 "runtime": 10.003293, 00:12:56.346 "iops": 25725.028748033274, 00:12:56.346 "mibps": 100.48839354700497, 00:12:56.346 "io_failed": 0, 00:12:56.346 "io_timeout": 0, 00:12:56.346 "avg_latency_us": 4972.639926425347, 00:12:56.346 "min_latency_us": 3031.04, 00:12:56.346 "max_latency_us": 13161.813333333334 00:12:56.346 } 00:12:56.346 ], 00:12:56.346 "core_count": 1 00:12:56.346 } 00:12:56.346 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1182511 00:12:56.346 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1182511 ']' 00:12:56.346 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1182511 00:12:56.346 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:12:56.346 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:56.346 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1182511 00:12:56.346 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:56.346 11:56:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:56.346 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1182511' 00:12:56.346 killing process with pid 1182511 00:12:56.346 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1182511 00:12:56.346 Received shutdown signal, test time was about 10.000000 seconds 00:12:56.346 00:12:56.346 Latency(us) 00:12:56.346 [2024-12-05T10:56:21.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:56.346 [2024-12-05T10:56:21.395Z] =================================================================================================================== 00:12:56.346 [2024-12-05T10:56:21.395Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:56.346 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1182511 00:12:56.346 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:56.605 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:56.605 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2614b038-ee42-4c04-a20d-5f5dc08373a4 00:12:56.605 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:12:56.865 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:12:56.865 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:12:56.865 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1178602 00:12:56.865 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1178602 00:12:56.865 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1178602 Killed "${NVMF_APP[@]}" "$@" 00:12:56.865 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:12:56.866 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:12:56.866 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:56.866 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:56.866 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:56.866 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=1184885 00:12:56.866 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 1184885 00:12:56.866 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1184885 ']' 00:12:56.866 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:56.866 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:56.866 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:56.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:56.866 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:56.866 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:56.866 11:56:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:56.866 [2024-12-05 11:56:21.893817] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:12:56.866 [2024-12-05 11:56:21.893869] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.126 [2024-12-05 11:56:21.984222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.126 [2024-12-05 11:56:22.013144] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.126 [2024-12-05 11:56:22.013170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.126 [2024-12-05 11:56:22.013175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.126 [2024-12-05 11:56:22.013180] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.126 [2024-12-05 11:56:22.013184] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:57.126 [2024-12-05 11:56:22.013628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.696 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:57.696 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:12:57.696 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:57.696 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:57.696 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:12:57.696 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.696 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:57.957 [2024-12-05 11:56:22.876217] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:12:57.957 [2024-12-05 11:56:22.876292] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:12:57.957 [2024-12-05 11:56:22.876314] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:12:57.957 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:12:57.957 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 094b9ec3-e133-4270-b387-6194c4c3763d 00:12:57.957 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=094b9ec3-e133-4270-b387-6194c4c3763d 
00:12:57.957 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:57.957 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:57.957 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:57.957 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:57.957 11:56:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:58.218 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 094b9ec3-e133-4270-b387-6194c4c3763d -t 2000 00:12:58.218 [ 00:12:58.218 { 00:12:58.218 "name": "094b9ec3-e133-4270-b387-6194c4c3763d", 00:12:58.218 "aliases": [ 00:12:58.218 "lvs/lvol" 00:12:58.218 ], 00:12:58.218 "product_name": "Logical Volume", 00:12:58.218 "block_size": 4096, 00:12:58.218 "num_blocks": 38912, 00:12:58.218 "uuid": "094b9ec3-e133-4270-b387-6194c4c3763d", 00:12:58.218 "assigned_rate_limits": { 00:12:58.218 "rw_ios_per_sec": 0, 00:12:58.218 "rw_mbytes_per_sec": 0, 00:12:58.218 "r_mbytes_per_sec": 0, 00:12:58.218 "w_mbytes_per_sec": 0 00:12:58.218 }, 00:12:58.218 "claimed": false, 00:12:58.218 "zoned": false, 00:12:58.218 "supported_io_types": { 00:12:58.218 "read": true, 00:12:58.218 "write": true, 00:12:58.218 "unmap": true, 00:12:58.218 "flush": false, 00:12:58.218 "reset": true, 00:12:58.218 "nvme_admin": false, 00:12:58.218 "nvme_io": false, 00:12:58.218 "nvme_io_md": false, 00:12:58.218 "write_zeroes": true, 00:12:58.218 "zcopy": false, 00:12:58.218 "get_zone_info": false, 00:12:58.218 "zone_management": false, 00:12:58.218 "zone_append": 
false, 00:12:58.218 "compare": false, 00:12:58.218 "compare_and_write": false, 00:12:58.218 "abort": false, 00:12:58.218 "seek_hole": true, 00:12:58.218 "seek_data": true, 00:12:58.218 "copy": false, 00:12:58.218 "nvme_iov_md": false 00:12:58.218 }, 00:12:58.218 "driver_specific": { 00:12:58.218 "lvol": { 00:12:58.218 "lvol_store_uuid": "2614b038-ee42-4c04-a20d-5f5dc08373a4", 00:12:58.218 "base_bdev": "aio_bdev", 00:12:58.218 "thin_provision": false, 00:12:58.218 "num_allocated_clusters": 38, 00:12:58.218 "snapshot": false, 00:12:58.218 "clone": false, 00:12:58.218 "esnap_clone": false 00:12:58.218 } 00:12:58.218 } 00:12:58.218 } 00:12:58.218 ] 00:12:58.218 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:58.218 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2614b038-ee42-4c04-a20d-5f5dc08373a4 00:12:58.218 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:12:58.479 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:12:58.479 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2614b038-ee42-4c04-a20d-5f5dc08373a4 00:12:58.479 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:12:58.739 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:12:58.739 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:12:58.739 [2024-12-05 11:56:23.720926] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:12:58.739 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2614b038-ee42-4c04-a20d-5f5dc08373a4 00:12:58.739 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:12:58.739 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2614b038-ee42-4c04-a20d-5f5dc08373a4 00:12:58.739 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.739 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.739 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.739 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.739 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.739 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:58.739 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:58.739 11:56:23 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:58.739 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2614b038-ee42-4c04-a20d-5f5dc08373a4 00:12:59.001 request: 00:12:59.001 { 00:12:59.001 "uuid": "2614b038-ee42-4c04-a20d-5f5dc08373a4", 00:12:59.001 "method": "bdev_lvol_get_lvstores", 00:12:59.001 "req_id": 1 00:12:59.001 } 00:12:59.001 Got JSON-RPC error response 00:12:59.001 response: 00:12:59.001 { 00:12:59.001 "code": -19, 00:12:59.001 "message": "No such device" 00:12:59.001 } 00:12:59.001 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:12:59.001 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:59.001 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:59.001 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:59.001 11:56:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:12:59.263 aio_bdev 00:12:59.263 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 094b9ec3-e133-4270-b387-6194c4c3763d 00:12:59.263 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=094b9ec3-e133-4270-b387-6194c4c3763d 00:12:59.263 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:59.263 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:12:59.263 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:59.263 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:59.263 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:12:59.263 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 094b9ec3-e133-4270-b387-6194c4c3763d -t 2000 00:12:59.524 [ 00:12:59.524 { 00:12:59.524 "name": "094b9ec3-e133-4270-b387-6194c4c3763d", 00:12:59.524 "aliases": [ 00:12:59.524 "lvs/lvol" 00:12:59.524 ], 00:12:59.524 "product_name": "Logical Volume", 00:12:59.524 "block_size": 4096, 00:12:59.524 "num_blocks": 38912, 00:12:59.524 "uuid": "094b9ec3-e133-4270-b387-6194c4c3763d", 00:12:59.524 "assigned_rate_limits": { 00:12:59.524 "rw_ios_per_sec": 0, 00:12:59.524 "rw_mbytes_per_sec": 0, 00:12:59.524 "r_mbytes_per_sec": 0, 00:12:59.524 "w_mbytes_per_sec": 0 00:12:59.524 }, 00:12:59.524 "claimed": false, 00:12:59.524 "zoned": false, 00:12:59.524 "supported_io_types": { 00:12:59.524 "read": true, 00:12:59.524 "write": true, 00:12:59.524 "unmap": true, 00:12:59.524 "flush": false, 00:12:59.524 "reset": true, 00:12:59.524 "nvme_admin": false, 00:12:59.524 "nvme_io": false, 00:12:59.524 "nvme_io_md": false, 00:12:59.524 "write_zeroes": true, 00:12:59.524 "zcopy": false, 00:12:59.524 "get_zone_info": false, 00:12:59.524 "zone_management": false, 00:12:59.524 "zone_append": false, 00:12:59.524 "compare": false, 00:12:59.524 "compare_and_write": false, 
00:12:59.524 "abort": false, 00:12:59.524 "seek_hole": true, 00:12:59.524 "seek_data": true, 00:12:59.524 "copy": false, 00:12:59.524 "nvme_iov_md": false 00:12:59.524 }, 00:12:59.524 "driver_specific": { 00:12:59.524 "lvol": { 00:12:59.524 "lvol_store_uuid": "2614b038-ee42-4c04-a20d-5f5dc08373a4", 00:12:59.524 "base_bdev": "aio_bdev", 00:12:59.524 "thin_provision": false, 00:12:59.524 "num_allocated_clusters": 38, 00:12:59.524 "snapshot": false, 00:12:59.524 "clone": false, 00:12:59.524 "esnap_clone": false 00:12:59.524 } 00:12:59.524 } 00:12:59.524 } 00:12:59.524 ] 00:12:59.524 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:12:59.524 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2614b038-ee42-4c04-a20d-5f5dc08373a4 00:12:59.524 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:12:59.786 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:12:59.786 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2614b038-ee42-4c04-a20d-5f5dc08373a4 00:12:59.786 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:12:59.786 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:12:59.786 11:56:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 094b9ec3-e133-4270-b387-6194c4c3763d 00:13:00.046 11:56:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2614b038-ee42-4c04-a20d-5f5dc08373a4 00:13:00.305 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:13:00.305 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:13:00.305 00:13:00.305 real 0m17.376s 00:13:00.305 user 0m45.838s 00:13:00.305 sys 0m2.997s 00:13:00.305 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.305 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:13:00.305 ************************************ 00:13:00.305 END TEST lvs_grow_dirty 00:13:00.305 ************************************ 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:13:00.564 nvmf_trace.0 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:00.564 rmmod nvme_tcp 00:13:00.564 rmmod nvme_fabrics 00:13:00.564 rmmod nvme_keyring 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 1184885 ']' 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 1184885 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1184885 ']' 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1184885 
00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1184885 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1184885' 00:13:00.564 killing process with pid 1184885 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1184885 00:13:00.564 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1184885 00:13:00.837 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:00.837 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:13:00.837 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@254 -- # local dev 00:13:00.837 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:00.837 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:00.837 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:00.837 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # return 0 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:13:02.748 11:56:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@274 -- # iptr 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-save 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-restore 00:13:02.748 00:13:02.748 real 0m45.027s 00:13:02.748 user 1m7.875s 00:13:02.748 sys 0m10.825s 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:13:02.748 ************************************ 00:13:02.748 END TEST nvmf_lvs_grow 00:13:02.748 ************************************ 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:02.748 11:56:27 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:03.010 ************************************ 00:13:03.010 START TEST nvmf_bdev_io_wait 00:13:03.010 ************************************ 00:13:03.010 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
00:13:03.010 * Looking for test storage... 00:13:03.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.010 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:03.010 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:13:03.010 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:13:03.010 11:56:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:03.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.010 --rc genhtml_branch_coverage=1 00:13:03.010 --rc genhtml_function_coverage=1 00:13:03.010 --rc genhtml_legend=1 00:13:03.010 --rc geninfo_all_blocks=1 00:13:03.010 --rc geninfo_unexecuted_blocks=1 00:13:03.010 00:13:03.010 ' 00:13:03.010 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:03.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.011 --rc genhtml_branch_coverage=1 00:13:03.011 --rc genhtml_function_coverage=1 00:13:03.011 --rc genhtml_legend=1 00:13:03.011 --rc geninfo_all_blocks=1 00:13:03.011 --rc geninfo_unexecuted_blocks=1 00:13:03.011 00:13:03.011 ' 00:13:03.011 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:03.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.011 --rc genhtml_branch_coverage=1 00:13:03.011 --rc genhtml_function_coverage=1 00:13:03.011 --rc genhtml_legend=1 00:13:03.011 --rc geninfo_all_blocks=1 00:13:03.011 --rc geninfo_unexecuted_blocks=1 00:13:03.011 00:13:03.011 ' 00:13:03.011 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:03.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:03.011 --rc genhtml_branch_coverage=1 00:13:03.011 --rc genhtml_function_coverage=1 00:13:03.011 --rc genhtml_legend=1 00:13:03.011 --rc geninfo_all_blocks=1 00:13:03.011 --rc geninfo_unexecuted_blocks=1 00:13:03.011 00:13:03.011 ' 00:13:03.011 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.011 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:13:03.011 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.011 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.011 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.011 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.011 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.011 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:03.011 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.011 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:03.273 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:03.273 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:03.273 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.273 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:03.273 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:03.273 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:13:03.274 11:56:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@50 -- # : 0 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:03.274 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 
00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:13:03.274 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:11.414 11:56:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # local -ga e810 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:11.414 11:56:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:11.414 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:11.414 11:56:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:11.414 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:11.414 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:11.415 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:11.415 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:11.415 
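The discovery loop above resolves each supported NIC's PCI address to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/` (the `pci_net_devs=(".../net/"*)` lines). A hedged sketch of that lookup, with the sysfs root made a parameter so it can be exercised against a fake tree; the helper name `pci_to_net_devs` is mine, not from the script:

```shell
#!/usr/bin/env bash
# List the network interface names registered under a PCI device,
# using the same sysfs glob the trace shows.
# $1 = PCI address (e.g. 0000:4b:00.0), $2 = sysfs root (default /sys).
pci_to_net_devs() {
    local pci=$1 root=${2:-/sys}
    local devs=("$root/bus/pci/devices/$pci/net/"*)
    # Without nullglob, an unmatched glob stays literal; detect that.
    [ -e "${devs[0]}" ] || return 1
    # Strip the directory prefix, keeping only the interface names.
    printf '%s\n' "${devs[@]##*/}"
}
```

On a host like the one in this run, `pci_to_net_devs 0000:4b:00.0` would print `cvl_0_0`, matching the "Found net devices under 0000:4b:00.0" line above.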
11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@247 -- # create_target_ns 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 
00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:11.415 11:56:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:11.415 10.0.0.1 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:11.415 11:56:35 
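The `set_ip` steps above pass the address around as a 32-bit integer (167772161 for 10.0.0.1) and convert it with `printf '%u.%u.%u.%u\n'`. A minimal standalone version of that conversion; the function name `val_to_ip` comes from the trace, the octet-shifting body is my reconstruction:

```shell
#!/usr/bin/env bash
# Convert a 32-bit unsigned integer into dotted-quad IPv4 notation,
# mirroring the val_to_ip helper visible in the nvmf/setup.sh trace.
val_to_ip() {
    local val=$1
    # Shift out each octet, most significant first.
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2
```

This also explains the IP pool arithmetic in the trace (`ip_pool=0x0a000001`, incremented by 2 per initiator/target pair).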
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:11.415 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:11.416 10.0.0.2 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
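Taken together, the trace above builds one initiator/target pair: the target NIC is moved into the `nvmf_ns_spdk` namespace, both ends get addresses from the 10.0.0.0/24 pool, the links are brought up, and an iptables ACCEPT rule opens TCP port 4420. A condensed dry-run sketch of that sequence (it only prints the commands, since actually executing them needs root and the physical `cvl_0_*` devices):

```shell
#!/usr/bin/env bash
# Dry-run plan of the single-pair setup performed by nvmf/setup.sh.
# run() echoes each privileged command instead of executing it.
run() { echo "$*"; }

setup_pair() {
    local ns=$1 initiator=$2 target=$3 init_ip=$4 tgt_ip=$5
    run ip netns add "$ns"
    run ip netns exec "$ns" ip link set lo up
    run ip link set "$target" netns "$ns"
    run ip addr add "$init_ip/24" dev "$initiator"
    run ip netns exec "$ns" ip addr add "$tgt_ip/24" dev "$target"
    run ip link set "$initiator" up
    run ip netns exec "$ns" ip link set "$target" up
    run iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT
}

setup_pair nvmf_ns_spdk cvl_0_0 cvl_0_1 10.0.0.1 10.0.0.2
```

Swapping `run() { echo "$*"; }` for `run() { "$@"; }` (as root) would perform the same setup for real; the echoed plan matches the order of the eval'd commands in the trace.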
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:11.416 
11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:11.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:11.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.664 ms 00:13:11.416 00:13:11.416 --- 10.0.0.1 ping statistics --- 00:13:11.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.416 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:11.416 
11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:11.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:11.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.282 ms 00:13:11.416 00:13:11.416 --- 10.0.0.2 ping statistics --- 00:13:11.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:11.416 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:11.416 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@322 -- # 
NVMF_TARGET_INTERFACE2= 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:11.417 11:56:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev= 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:11.417 11:56:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev= 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:13:11.417 ' 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:11.417 11:56:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=1189992 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 1189992 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:13:11.417 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1189992 ']' 00:13:11.418 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:11.418 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:11.418 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:11.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:11.418 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:11.418 11:56:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:11.418 [2024-12-05 11:56:35.776511] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:13:11.418 [2024-12-05 11:56:35.776572] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:11.418 [2024-12-05 11:56:35.879943] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:11.418 [2024-12-05 11:56:35.933911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:11.418 [2024-12-05 11:56:35.933968] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:11.418 [2024-12-05 11:56:35.933977] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:11.418 [2024-12-05 11:56:35.933984] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:11.418 [2024-12-05 11:56:35.933990] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:11.418 [2024-12-05 11:56:35.936070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:11.418 [2024-12-05 11:56:35.936235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:11.418 [2024-12-05 11:56:35.936396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.418 [2024-12-05 11:56:35.936397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.679 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:11.940 [2024-12-05 11:56:36.731777] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:11.940 Malloc0 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:11.940 11:56:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:11.940 [2024-12-05 11:56:36.797408] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1190181 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1190183 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:11.940 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:11.940 { 00:13:11.940 "params": { 00:13:11.940 "name": "Nvme$subsystem", 00:13:11.940 "trtype": "$TEST_TRANSPORT", 00:13:11.940 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:11.940 "adrfam": "ipv4", 00:13:11.940 "trsvcid": "$NVMF_PORT", 00:13:11.940 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:11.940 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:11.940 "hdgst": ${hdgst:-false}, 00:13:11.940 "ddgst": ${ddgst:-false} 00:13:11.940 }, 00:13:11.940 "method": "bdev_nvme_attach_controller" 00:13:11.940 } 00:13:11.940 EOF 00:13:11.940 )") 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1190186 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:11.941 { 00:13:11.941 "params": { 00:13:11.941 "name": "Nvme$subsystem", 00:13:11.941 "trtype": "$TEST_TRANSPORT", 00:13:11.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:11.941 "adrfam": "ipv4", 00:13:11.941 "trsvcid": "$NVMF_PORT", 00:13:11.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:11.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:11.941 "hdgst": ${hdgst:-false}, 00:13:11.941 "ddgst": ${ddgst:-false} 00:13:11.941 }, 
00:13:11.941 "method": "bdev_nvme_attach_controller" 00:13:11.941 } 00:13:11.941 EOF 00:13:11.941 )") 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1190190 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:11.941 { 00:13:11.941 "params": { 00:13:11.941 "name": "Nvme$subsystem", 00:13:11.941 "trtype": "$TEST_TRANSPORT", 00:13:11.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:11.941 "adrfam": "ipv4", 00:13:11.941 "trsvcid": "$NVMF_PORT", 00:13:11.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:11.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:11.941 "hdgst": ${hdgst:-false}, 00:13:11.941 "ddgst": ${ddgst:-false} 00:13:11.941 }, 00:13:11.941 "method": "bdev_nvme_attach_controller" 00:13:11.941 } 00:13:11.941 EOF 00:13:11.941 )") 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json 
/dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:11.941 { 00:13:11.941 "params": { 00:13:11.941 "name": "Nvme$subsystem", 00:13:11.941 "trtype": "$TEST_TRANSPORT", 00:13:11.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:11.941 "adrfam": "ipv4", 00:13:11.941 "trsvcid": "$NVMF_PORT", 00:13:11.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:11.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:11.941 "hdgst": ${hdgst:-false}, 00:13:11.941 "ddgst": ${ddgst:-false} 00:13:11.941 }, 00:13:11.941 "method": "bdev_nvme_attach_controller" 00:13:11.941 } 00:13:11.941 EOF 00:13:11.941 )") 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1190181 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:11.941 "params": { 00:13:11.941 "name": "Nvme1", 00:13:11.941 "trtype": "tcp", 00:13:11.941 "traddr": "10.0.0.2", 00:13:11.941 "adrfam": "ipv4", 00:13:11.941 "trsvcid": "4420", 00:13:11.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:11.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:11.941 "hdgst": false, 00:13:11.941 "ddgst": false 00:13:11.941 }, 00:13:11.941 "method": "bdev_nvme_attach_controller" 00:13:11.941 }' 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:11.941 "params": { 00:13:11.941 "name": "Nvme1", 00:13:11.941 "trtype": "tcp", 00:13:11.941 "traddr": "10.0.0.2", 00:13:11.941 "adrfam": "ipv4", 00:13:11.941 "trsvcid": "4420", 00:13:11.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:11.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:11.941 "hdgst": false, 00:13:11.941 "ddgst": false 00:13:11.941 }, 00:13:11.941 "method": "bdev_nvme_attach_controller" 00:13:11.941 }' 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:11.941 "params": { 00:13:11.941 "name": "Nvme1", 00:13:11.941 "trtype": "tcp", 00:13:11.941 "traddr": "10.0.0.2", 00:13:11.941 "adrfam": "ipv4", 00:13:11.941 "trsvcid": "4420", 00:13:11.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:11.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:11.941 "hdgst": false, 00:13:11.941 "ddgst": false 00:13:11.941 }, 00:13:11.941 "method": 
"bdev_nvme_attach_controller" 00:13:11.941 }' 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:13:11.941 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:11.941 "params": { 00:13:11.941 "name": "Nvme1", 00:13:11.941 "trtype": "tcp", 00:13:11.941 "traddr": "10.0.0.2", 00:13:11.941 "adrfam": "ipv4", 00:13:11.941 "trsvcid": "4420", 00:13:11.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:11.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:11.941 "hdgst": false, 00:13:11.941 "ddgst": false 00:13:11.941 }, 00:13:11.941 "method": "bdev_nvme_attach_controller" 00:13:11.941 }' 00:13:11.941 [2024-12-05 11:56:36.855947] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:13:11.941 [2024-12-05 11:56:36.856019] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:13:11.941 [2024-12-05 11:56:36.858316] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:13:11.941 [2024-12-05 11:56:36.858387] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:13:11.941 [2024-12-05 11:56:36.859942] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:13:11.941 [2024-12-05 11:56:36.860003] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:13:11.941 [2024-12-05 11:56:36.861217] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:13:11.941 [2024-12-05 11:56:36.861288] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:13:12.202 [2024-12-05 11:56:37.083589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.202 [2024-12-05 11:56:37.124502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:12.202 [2024-12-05 11:56:37.173975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.202 [2024-12-05 11:56:37.212818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:12.463 [2024-12-05 11:56:37.261153] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.463 [2024-12-05 11:56:37.300249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:12.463 [2024-12-05 11:56:37.308767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:12.463 [2024-12-05 11:56:37.348635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:13:12.463 Running I/O for 1 seconds... 00:13:12.463 Running I/O for 1 seconds... 00:13:12.732 Running I/O for 1 seconds... 00:13:12.732 Running I/O for 1 seconds... 
00:13:13.673 10801.00 IOPS, 42.19 MiB/s 00:13:13.673 Latency(us) 00:13:13.673 [2024-12-05T10:56:38.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.673 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:13:13.673 Nvme1n1 : 1.01 10855.00 42.40 0.00 0.00 11744.99 6198.61 17585.49 00:13:13.673 [2024-12-05T10:56:38.722Z] =================================================================================================================== 00:13:13.673 [2024-12-05T10:56:38.722Z] Total : 10855.00 42.40 0.00 0.00 11744.99 6198.61 17585.49 00:13:13.673 10822.00 IOPS, 42.27 MiB/s 00:13:13.673 Latency(us) 00:13:13.673 [2024-12-05T10:56:38.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.673 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:13:13.673 Nvme1n1 : 1.01 10896.87 42.57 0.00 0.00 11706.02 5188.27 21299.20 00:13:13.673 [2024-12-05T10:56:38.722Z] =================================================================================================================== 00:13:13.673 [2024-12-05T10:56:38.722Z] Total : 10896.87 42.57 0.00 0.00 11706.02 5188.27 21299.20 00:13:13.673 178328.00 IOPS, 696.59 MiB/s 00:13:13.673 Latency(us) 00:13:13.673 [2024-12-05T10:56:38.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.673 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:13:13.673 Nvme1n1 : 1.00 177957.13 695.15 0.00 0.00 715.14 302.08 2048.00 00:13:13.673 [2024-12-05T10:56:38.722Z] =================================================================================================================== 00:13:13.673 [2024-12-05T10:56:38.722Z] Total : 177957.13 695.15 0.00 0.00 715.14 302.08 2048.00 00:13:13.673 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1190183 00:13:13.673 9670.00 IOPS, 37.77 MiB/s 00:13:13.673 Latency(us) 00:13:13.673 
[2024-12-05T10:56:38.722Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.673 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:13:13.673 Nvme1n1 : 1.01 9733.88 38.02 0.00 0.00 13102.92 5488.64 23156.05 00:13:13.673 [2024-12-05T10:56:38.722Z] =================================================================================================================== 00:13:13.673 [2024-12-05T10:56:38.722Z] Total : 9733.88 38.02 0.00 0.00 13102.92 5488.64 23156.05 00:13:13.673 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1190186 00:13:13.673 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1190190 00:13:13.673 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.673 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:13.673 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:13.933 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i 
in {1..20} 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:13.934 rmmod nvme_tcp 00:13:13.934 rmmod nvme_fabrics 00:13:13.934 rmmod nvme_keyring 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 1189992 ']' 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 1189992 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1189992 ']' 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1189992 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1189992 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1189992' 00:13:13.934 killing process with pid 1189992 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1189992 00:13:13.934 11:56:38 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1189992 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@254 -- # local dev 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:13.934 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # return 0 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:16.476 11:56:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@274 -- # iptr 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-save 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-restore 00:13:16.476 00:13:16.476 real 0m13.228s 00:13:16.476 user 0m19.501s 00:13:16.476 sys 0m7.703s 00:13:16.476 
11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:13:16.476 ************************************ 00:13:16.476 END TEST nvmf_bdev_io_wait 00:13:16.476 ************************************ 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:16.476 ************************************ 00:13:16.476 START TEST nvmf_queue_depth 00:13:16.476 ************************************ 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:13:16.476 * Looking for test storage... 
00:13:16.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:13:16.476 
11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:13:16.476 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:16.477 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:13:16.477 --rc genhtml_branch_coverage=1 00:13:16.477 --rc genhtml_function_coverage=1 00:13:16.477 --rc genhtml_legend=1 00:13:16.477 --rc geninfo_all_blocks=1 00:13:16.477 --rc geninfo_unexecuted_blocks=1 00:13:16.477 00:13:16.477 ' 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:16.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.477 --rc genhtml_branch_coverage=1 00:13:16.477 --rc genhtml_function_coverage=1 00:13:16.477 --rc genhtml_legend=1 00:13:16.477 --rc geninfo_all_blocks=1 00:13:16.477 --rc geninfo_unexecuted_blocks=1 00:13:16.477 00:13:16.477 ' 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:16.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.477 --rc genhtml_branch_coverage=1 00:13:16.477 --rc genhtml_function_coverage=1 00:13:16.477 --rc genhtml_legend=1 00:13:16.477 --rc geninfo_all_blocks=1 00:13:16.477 --rc geninfo_unexecuted_blocks=1 00:13:16.477 00:13:16.477 ' 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:16.477 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:16.477 --rc genhtml_branch_coverage=1 00:13:16.477 --rc genhtml_function_coverage=1 00:13:16.477 --rc genhtml_legend=1 00:13:16.477 --rc geninfo_all_blocks=1 00:13:16.477 --rc geninfo_unexecuted_blocks=1 00:13:16.477 00:13:16.477 ' 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:16.477 11:56:41 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@50 -- # : 0 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:16.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 
00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:13:16.477 11:56:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:13:24.617 11:56:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:24.617 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in 
"${pci_devs[@]}" 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:24.617 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.617 11:56:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:24.617 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:24.617 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ 
tcp == tcp ]] 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:13:24.617 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@247 -- # create_target_ns 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA 
dev_map 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:24.618 11:56:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:24.618 10.0.0.1 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:24.618 10.0.0.2 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:13:24.618 
11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:24.618 11:56:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 
00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:24.618 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:24.618 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:24.618 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.561 ms 00:13:24.618 00:13:24.618 --- 10.0.0.1 ping statistics --- 00:13:24.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.619 rtt min/avg/max/mdev = 0.561/0.561/0.561/0.000 ms 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:24.619 11:56:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:24.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:24.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:13:24.619 00:13:24.619 --- 10.0.0.2 ping statistics --- 00:13:24.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:24.619 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:13:24.619 
11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 
00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:24.619 11:56:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:13:24.619 ' 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 
00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:24.619 11:56:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:24.619 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=1194894 00:13:24.619 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 1194894 00:13:24.620 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:24.620 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1194894 ']' 00:13:24.620 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:24.620 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:24.620 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:24.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:24.620 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:24.620 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:24.620 [2024-12-05 11:56:49.059225] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:13:24.620 [2024-12-05 11:56:49.059288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:24.620 [2024-12-05 11:56:49.165585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:24.620 [2024-12-05 11:56:49.216967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:24.620 [2024-12-05 11:56:49.217019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:24.620 [2024-12-05 11:56:49.217028] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:24.620 [2024-12-05 11:56:49.217035] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:24.620 [2024-12-05 11:56:49.217042] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:24.620 [2024-12-05 11:56:49.217823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:24.881 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:24.881 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:24.881 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:24.881 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:24.881 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:24.881 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:24.881 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:24.881 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.881 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:24.881 [2024-12-05 11:56:49.930515] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:25.143 Malloc0 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.143 11:56:49 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:25.143 [2024-12-05 11:56:49.991673] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1195100 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 
1024 -o 4096 -w verify -t 10 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1195100 /var/tmp/bdevperf.sock 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1195100 ']' 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:25.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:25.143 11:56:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:25.143 [2024-12-05 11:56:50.050627] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:13:25.143 [2024-12-05 11:56:50.050701] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195100 ] 00:13:25.143 [2024-12-05 11:56:50.145414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:25.403 [2024-12-05 11:56:50.201838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:25.974 11:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:25.974 11:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:13:25.974 11:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:13:25.974 11:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.974 11:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:25.974 NVMe0n1 00:13:25.974 11:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.975 11:56:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:13:26.234 Running I/O for 10 seconds... 
00:13:28.113 9150.00 IOPS, 35.74 MiB/s [2024-12-05T10:56:54.105Z] 10309.00 IOPS, 40.27 MiB/s [2024-12-05T10:56:55.488Z] 10792.67 IOPS, 42.16 MiB/s [2024-12-05T10:56:56.427Z] 11059.50 IOPS, 43.20 MiB/s [2024-12-05T10:56:57.366Z] 11468.60 IOPS, 44.80 MiB/s [2024-12-05T10:56:58.305Z] 11753.67 IOPS, 45.91 MiB/s [2024-12-05T10:56:59.243Z] 11963.86 IOPS, 46.73 MiB/s [2024-12-05T10:57:00.181Z] 12154.50 IOPS, 47.48 MiB/s [2024-12-05T10:57:01.121Z] 12286.67 IOPS, 47.99 MiB/s [2024-12-05T10:57:01.381Z] 12433.00 IOPS, 48.57 MiB/s 00:13:36.332 Latency(us) 00:13:36.332 [2024-12-05T10:57:01.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.332 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:13:36.332 Verification LBA range: start 0x0 length 0x4000 00:13:36.332 NVMe0n1 : 10.05 12464.84 48.69 0.00 0.00 81841.75 10212.69 73837.23 00:13:36.332 [2024-12-05T10:57:01.381Z] =================================================================================================================== 00:13:36.332 [2024-12-05T10:57:01.381Z] Total : 12464.84 48.69 0.00 0.00 81841.75 10212.69 73837.23 00:13:36.332 { 00:13:36.332 "results": [ 00:13:36.332 { 00:13:36.332 "job": "NVMe0n1", 00:13:36.332 "core_mask": "0x1", 00:13:36.332 "workload": "verify", 00:13:36.332 "status": "finished", 00:13:36.332 "verify_range": { 00:13:36.332 "start": 0, 00:13:36.332 "length": 16384 00:13:36.332 }, 00:13:36.332 "queue_depth": 1024, 00:13:36.332 "io_size": 4096, 00:13:36.332 "runtime": 10.046415, 00:13:36.332 "iops": 12464.844424603205, 00:13:36.332 "mibps": 48.69079853360627, 00:13:36.332 "io_failed": 0, 00:13:36.332 "io_timeout": 0, 00:13:36.332 "avg_latency_us": 81841.74903282306, 00:13:36.332 "min_latency_us": 10212.693333333333, 00:13:36.332 "max_latency_us": 73837.22666666667 00:13:36.332 } 00:13:36.332 ], 00:13:36.332 "core_count": 1 00:13:36.332 } 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 1195100 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1195100 ']' 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1195100 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195100 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195100' 00:13:36.332 killing process with pid 1195100 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1195100 00:13:36.332 Received shutdown signal, test time was about 10.000000 seconds 00:13:36.332 00:13:36.332 Latency(us) 00:13:36.332 [2024-12-05T10:57:01.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.332 [2024-12-05T10:57:01.381Z] =================================================================================================================== 00:13:36.332 [2024-12-05T10:57:01.381Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1195100 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:36.332 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:36.332 rmmod nvme_tcp 00:13:36.332 rmmod nvme_fabrics 00:13:36.593 rmmod nvme_keyring 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 1194894 ']' 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 1194894 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1194894 ']' 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1194894 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194894 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194894' 00:13:36.593 killing process with pid 1194894 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1194894 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1194894 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@254 -- # local dev 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:36.593 11:57:01 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # return 0 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:39.154 11:57:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@274 -- # iptr 00:13:39.154 
11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-save 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-restore 00:13:39.154 00:13:39.154 real 0m22.534s 00:13:39.154 user 0m25.707s 00:13:39.154 sys 0m7.121s 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:13:39.154 ************************************ 00:13:39.154 END TEST nvmf_queue_depth 00:13:39.154 ************************************ 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:39.154 ************************************ 00:13:39.154 START TEST nvmf_target_multipath 00:13:39.154 ************************************ 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:13:39.154 * Looking for test storage... 
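The `iptr` cleanup step traced above tears down SPDK firewall rules with a save/filter/restore pipeline: `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A minimal sketch of that filter pattern, run against a canned ruleset string rather than live iptables state (the rule text below is illustrative, not from the log):

```shell
# Sketch of the iptables cleanup pattern: dump the ruleset, drop every
# rule carrying the SPDK_NVMF comment tag, and keep the rest.
# The real pipeline in the log is:
#   iptables-save | grep -v SPDK_NVMF | iptables-restore
# Here a hypothetical two-rule dump stands in for `iptables-save` output.
ruleset='-A INPUT -p tcp --dport 4420 -m comment --comment SPDK_NVMF -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT'
kept=$(printf '%s\n' "$ruleset" | grep -v SPDK_NVMF)
echo "$kept"
```

Filtering the textual dump instead of deleting rules one by one is what lets the script remove every SPDK-tagged rule in a single atomic restore.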
00:13:39.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:39.154 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:13:39.155 11:57:03 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
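The `lt 1.15 2` trace above shows `scripts/common.sh` comparing versions by splitting each string on `.`, `-`, and `:` (`IFS=.-:` with `read -ra`) and walking the numeric fields. A simplified sketch of that component-wise comparison (`version_lt` is a condensed stand-in, not the in-tree function name):

```shell
# Component-wise version comparison, as traced in scripts/common.sh:
# split both versions on '.', '-' or ':' into arrays, then compare
# field by field, treating missing fields as 0.
version_lt() {
    local IFS='.-:'
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local max=${#ver1[@]}
    (( ${#ver2[@]} > max )) && max=${#ver2[@]}
    local v a b
    for (( v = 0; v < max; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1  # all fields equal: not strictly less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This is why the lcov check in the log succeeds: `1.15 < 2` on the first field, so the coverage options get enabled.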
00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:39.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.155 --rc genhtml_branch_coverage=1 00:13:39.155 --rc genhtml_function_coverage=1 00:13:39.155 --rc genhtml_legend=1 00:13:39.155 --rc geninfo_all_blocks=1 00:13:39.155 --rc geninfo_unexecuted_blocks=1 00:13:39.155 00:13:39.155 ' 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:39.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.155 --rc genhtml_branch_coverage=1 00:13:39.155 --rc genhtml_function_coverage=1 00:13:39.155 --rc genhtml_legend=1 00:13:39.155 --rc geninfo_all_blocks=1 00:13:39.155 --rc geninfo_unexecuted_blocks=1 00:13:39.155 00:13:39.155 ' 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:39.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.155 --rc genhtml_branch_coverage=1 00:13:39.155 --rc genhtml_function_coverage=1 00:13:39.155 --rc genhtml_legend=1 00:13:39.155 --rc geninfo_all_blocks=1 00:13:39.155 --rc geninfo_unexecuted_blocks=1 00:13:39.155 00:13:39.155 ' 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:39.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:39.155 --rc genhtml_branch_coverage=1 00:13:39.155 --rc genhtml_function_coverage=1 00:13:39.155 --rc genhtml_legend=1 00:13:39.155 --rc geninfo_all_blocks=1 00:13:39.155 --rc geninfo_unexecuted_blocks=1 00:13:39.155 00:13:39.155 ' 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:39.155 
11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:39.155 11:57:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:39.155 11:57:04 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@50 -- # : 0 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:39.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:39.155 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:39.156 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:39.156 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:39.156 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # xtrace_disable 00:13:39.156 11:57:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@131 -- # pci_devs=() 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@132 -- # local -a pci_net_devs 
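The `gather_supported_nvmf_pci_devs` trace that follows maps each supported PCI address to its kernel interface name by globbing `/sys/bus/pci/devices/$pci/net/*` and stripping the directory prefix (`common.sh@227` and `@243` in the log). A sketch of that lookup, simulated with a temporary directory tree so it runs without real sysfs hardware:

```shell
# Map PCI addresses to net interface names the way common.sh does:
# glob the device's net/ subdirectory, then keep only the basename.
# A temp tree stands in for /sys/bus/pci/devices; the addresses and
# cvl_0_* names mirror the ones reported in the log.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:4b:00.0/net/cvl_0_0" "$sysfs/0000:4b:00.1/net/cvl_0_1"
out=$(
    for pci in "$sysfs"/*; do
        pci_net_devs=("$pci/net/"*)              # one entry per interface dir
        pci_net_devs=("${pci_net_devs[@]##*/}")  # strip path, keep iface name
        echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
    done
)
echo "$out"
rm -rf "$sysfs"
```

The `##*/` parameter expansion is the whole trick: sysfs already encodes the PCI-to-netdev binding as a directory layout, so no `lspci` or driver query is needed.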
00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@135 -- # net_devs=() 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@136 -- # e810=() 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@136 -- # local -ga e810 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@137 -- # x722=() 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@137 -- # local -ga x722 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@138 -- # mlx=() 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@138 -- # local -ga mlx 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.295 11:57:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:47.295 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:47.295 11:57:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:47.295 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:47.295 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:47.295 Found net 
devices under 0000:4b:00.1: cvl_0_1 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # is_hw=yes 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@247 -- # create_target_ns 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:47.295 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:47.296 11:57:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:47.296 
11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:47.296 10.0.0.1 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:47.296 11:57:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:47.296 10.0.0.2 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set 
cvl_0_1 up 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:47.296 11:57:11 
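Condensed, the phy-device setup traced above is: move the target device into the test namespace, address both ends, bring both links up, and open TCP port 4420. A dry-run sketch (commands are echoed rather than executed, since they need root; the `run` indirection is an illustrative stand-in for setup.sh's `eval` pattern):

```shell
# Dry-run sketch of the phy-device setup performed by nvmf/setup.sh:
# echoes each privileged command instead of running it.
run() { echo "$*"; }    # replace with: eval "$*"  to actually apply

setup_pair() {
  local initiator=$1 target=$2 ns=$3
  run ip link set "$target" netns "$ns"                          # add_to_ns
  run ip addr add 10.0.0.1/24 dev "$initiator"                   # set_ip
  run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target"
  run ip link set "$initiator" up                                # set_up
  run ip netns exec "$ns" ip link set "$target" up
  # ipts: admit NVMe/TCP traffic; the real rule embeds an SPDK_NVMF
  # comment tag so teardown can strip exactly these rules later
  run iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 \
    -j ACCEPT -m comment --comment SPDK_NVMF
}

setup_pair cvl_0_0 cvl_0_1 nvmf_ns_spdk
```

The trace also writes each address to `/sys/class/net/<dev>/ifalias`, which later lookups (`get_ip_address`) read back instead of parsing `ip addr` output.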
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:47.296 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:47.296 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.296 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.631 ms 00:13:47.296 00:13:47.296 --- 10.0.0.1 ping statistics --- 00:13:47.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.296 rtt min/avg/max/mdev = 0.631/0.631/0.631/0.000 ms 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:47.297 
11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:47.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:47.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:13:47.297 00:13:47.297 --- 10.0.0.2 ping statistics --- 00:13:47.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.297 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # return 0 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath 
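The `ping_ips` pass above verifies the pair in both directions: from inside the target namespace back to the initiator address, then from the host into the namespace. Sketched as a dry run (echoed, not executed; `run` is an illustrative stand-in):

```shell
# Dry-run sketch of nvmf/setup.sh's ping_ips connectivity check.
run() { echo "$*"; }    # replace with: eval "$*"  to actually apply

ping_pair() {
  local ns=$1 initiator_ip=$2 target_ip=$3
  # namespace -> initiator (10.0.0.1), then host -> target (10.0.0.2)
  run ip netns exec "$ns" ping -c 1 "$initiator_ip"
  run ping -c 1 "$target_ip"
}

ping_pair nvmf_ns_spdk 10.0.0.1 10.0.0.2
```

Both single-packet pings in the log succeed with 0% loss, which is what gates the rest of the multipath test setup.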
-- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:47.297 11:57:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:47.297 
11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:13:47.297 ' 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:13:47.297 only one NIC for nvmf test 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:47.297 11:57:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:13:47.297 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:47.298 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:13:47.298 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:47.298 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:47.298 rmmod nvme_tcp 00:13:47.298 rmmod nvme_fabrics 00:13:47.298 rmmod nvme_keyring 00:13:47.298 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:47.298 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:13:47.298 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:13:47.298 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:13:47.298 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:47.298 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:13:47.298 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:13:47.298 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:47.298 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:47.298 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:47.298 11:57:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:49.233 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@258 -- # delete_main_bridge 00:13:49.233 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:49.233 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:13:49.233 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:49.233 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:49.233 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:49.233 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:13:49.233 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:49.234 11:57:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:13:49.234 11:57:13 
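Both `nvmftestfini` passes above end in the same `nvmf_fini` sequence: remove the target namespace, flush addresses off the mapped devices, and restore iptables minus the SPDK-tagged rules. A dry-run sketch; the namespace removal runs behind `xtrace_disable_per_cmd` in the log, so its exact form here is an assumption:

```shell
# Dry-run sketch of nvmf_fini teardown; echoes instead of executing.
run() { echo "$*"; }    # replace with: eval "$*"  to actually apply

nvmf_fini_sketch() {
  run ip netns delete nvmf_ns_spdk      # _remove_target_ns (assumed form)
  run ip addr flush dev cvl_0_0         # flush_ip, per dev_map entry
  run ip addr flush dev cvl_0_1
  # iptr: rewrite the ruleset without the SPDK_NVMF-tagged entries
  run "iptables-save | grep -v SPDK_NVMF | iptables-restore"
}

nvmf_fini_sketch
```

The second fini pass is effectively a no-op: the dev_map was already reset, so only the bridge check and iptables filter run again.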
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:13:49.234 00:13:49.234 real 0m10.174s 00:13:49.234 user 0m2.170s 00:13:49.234 sys 0m5.956s 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:13:49.234 ************************************ 00:13:49.234 END TEST nvmf_target_multipath 00:13:49.234 ************************************ 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:49.234 11:57:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:49.234 ************************************ 00:13:49.234 START TEST nvmf_zcopy 00:13:49.234 ************************************ 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:13:49.234 * Looking for test storage... 
00:13:49.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:49.234 
11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:49.234 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:13:49.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.234 --rc genhtml_branch_coverage=1 00:13:49.234 --rc genhtml_function_coverage=1 00:13:49.234 --rc genhtml_legend=1 00:13:49.234 --rc geninfo_all_blocks=1 00:13:49.235 --rc 
geninfo_unexecuted_blocks=1 00:13:49.235 00:13:49.235 ' 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:13:49.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.235 --rc genhtml_branch_coverage=1 00:13:49.235 --rc genhtml_function_coverage=1 00:13:49.235 --rc genhtml_legend=1 00:13:49.235 --rc geninfo_all_blocks=1 00:13:49.235 --rc geninfo_unexecuted_blocks=1 00:13:49.235 00:13:49.235 ' 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:13:49.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.235 --rc genhtml_branch_coverage=1 00:13:49.235 --rc genhtml_function_coverage=1 00:13:49.235 --rc genhtml_legend=1 00:13:49.235 --rc geninfo_all_blocks=1 00:13:49.235 --rc geninfo_unexecuted_blocks=1 00:13:49.235 00:13:49.235 ' 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:13:49.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:49.235 --rc genhtml_branch_coverage=1 00:13:49.235 --rc genhtml_function_coverage=1 00:13:49.235 --rc genhtml_legend=1 00:13:49.235 --rc geninfo_all_blocks=1 00:13:49.235 --rc geninfo_unexecuted_blocks=1 00:13:49.235 00:13:49.235 ' 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
paths/export.sh@5 -- # export PATH 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:49.235 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:13:49.235 11:57:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:57.376 
11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:57.376 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 
00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:57.376 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:57.376 11:57:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:57.376 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:57.376 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:57.376 11:57:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@247 -- # create_target_ns 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:57.376 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:57.377 10.0.0.1 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 
NVMF_TARGET_NS_CMD 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:57.377 10.0.0.2 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:13:57.377 11:57:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:57.377 11:57:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:57.377 
11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:57.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:57.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.615 ms 00:13:57.377 00:13:57.377 --- 10.0.0.1 ping statistics --- 00:13:57.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.377 rtt min/avg/max/mdev = 0.615/0.615/0.615/0.000 ms 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 
00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:57.377 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:57.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:57.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:13:57.378 00:13:57.378 --- 10.0.0.2 ping statistics --- 00:13:57.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:57.378 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:57.378 
11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev= 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0 00:13:57.378 
11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev= 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:13:57.378 ' 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # 
NVMF_TRANSPORT_OPTS='-t tcp' 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=1206442 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 1206442 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1206442 ']' 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:57.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.378 11:57:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.378 [2024-12-05 11:57:21.964933] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:13:57.378 [2024-12-05 11:57:21.964999] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.378 [2024-12-05 11:57:22.065900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.378 [2024-12-05 11:57:22.116663] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.378 [2024-12-05 11:57:22.116713] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.378 [2024-12-05 11:57:22.116722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.378 [2024-12-05 11:57:22.116729] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.378 [2024-12-05 11:57:22.116736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:57.378 [2024-12-05 11:57:22.117514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.950 [2024-12-05 11:57:22.832919] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.950 [2024-12-05 11:57:22.857231] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.950 malloc0 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:13:57.950 11:57:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:13:57.950 { 00:13:57.950 "params": { 00:13:57.950 "name": "Nvme$subsystem", 00:13:57.950 "trtype": "$TEST_TRANSPORT", 00:13:57.950 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:57.950 "adrfam": "ipv4", 00:13:57.950 "trsvcid": "$NVMF_PORT", 00:13:57.950 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:57.950 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:57.950 "hdgst": ${hdgst:-false}, 00:13:57.950 "ddgst": ${ddgst:-false} 00:13:57.950 }, 00:13:57.950 "method": "bdev_nvme_attach_controller" 00:13:57.950 } 00:13:57.950 EOF 00:13:57.950 )") 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:13:57.950 11:57:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:13:57.950 "params": { 00:13:57.950 "name": "Nvme1", 00:13:57.950 "trtype": "tcp", 00:13:57.950 "traddr": "10.0.0.2", 00:13:57.950 "adrfam": "ipv4", 00:13:57.950 "trsvcid": "4420", 00:13:57.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:57.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:57.950 "hdgst": false, 00:13:57.950 "ddgst": false 00:13:57.950 }, 00:13:57.950 "method": "bdev_nvme_attach_controller" 00:13:57.950 }' 00:13:57.950 [2024-12-05 11:57:22.958096] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:13:57.951 [2024-12-05 11:57:22.958164] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1206754 ] 00:13:58.211 [2024-12-05 11:57:23.051776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.211 [2024-12-05 11:57:23.104977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.473 Running I/O for 10 seconds... 
00:14:00.355 6461.00 IOPS, 50.48 MiB/s [2024-12-05T10:57:26.786Z] 7607.50 IOPS, 59.43 MiB/s [2024-12-05T10:57:27.730Z] 8349.33 IOPS, 65.23 MiB/s [2024-12-05T10:57:28.670Z] 8724.50 IOPS, 68.16 MiB/s [2024-12-05T10:57:29.612Z] 8950.80 IOPS, 69.93 MiB/s [2024-12-05T10:57:30.550Z] 9098.83 IOPS, 71.08 MiB/s [2024-12-05T10:57:31.488Z] 9202.57 IOPS, 71.90 MiB/s [2024-12-05T10:57:32.428Z] 9282.00 IOPS, 72.52 MiB/s [2024-12-05T10:57:33.811Z] 9340.00 IOPS, 72.97 MiB/s [2024-12-05T10:57:33.811Z] 9390.40 IOPS, 73.36 MiB/s 00:14:08.762 Latency(us) 00:14:08.762 [2024-12-05T10:57:33.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:08.762 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:14:08.762 Verification LBA range: start 0x0 length 0x1000 00:14:08.762 Nvme1n1 : 10.01 9391.69 73.37 0.00 0.00 13579.85 1221.97 30801.92 00:14:08.762 [2024-12-05T10:57:33.811Z] =================================================================================================================== 00:14:08.762 [2024-12-05T10:57:33.811Z] Total : 9391.69 73.37 0.00 0.00 13579.85 1221.97 30801.92 00:14:08.762 11:57:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1208774 00:14:08.762 11:57:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:14:08.762 11:57:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:08.762 11:57:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:14:08.762 11:57:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:14:08.762 11:57:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:14:08.762 11:57:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:14:08.762 11:57:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:14:08.762 11:57:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:14:08.762 { 00:14:08.762 "params": { 00:14:08.762 "name": "Nvme$subsystem", 00:14:08.762 "trtype": "$TEST_TRANSPORT", 00:14:08.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:08.762 "adrfam": "ipv4", 00:14:08.762 "trsvcid": "$NVMF_PORT", 00:14:08.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:08.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:08.762 "hdgst": ${hdgst:-false}, 00:14:08.762 "ddgst": ${ddgst:-false} 00:14:08.762 }, 00:14:08.762 "method": "bdev_nvme_attach_controller" 00:14:08.762 } 00:14:08.762 EOF 00:14:08.762 )") 00:14:08.762 11:57:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:14:08.762 [2024-12-05 11:57:33.498345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.498373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 11:57:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:14:08.762 11:57:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:14:08.762 11:57:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:14:08.762 "params": { 00:14:08.762 "name": "Nvme1", 00:14:08.762 "trtype": "tcp", 00:14:08.762 "traddr": "10.0.0.2", 00:14:08.762 "adrfam": "ipv4", 00:14:08.762 "trsvcid": "4420", 00:14:08.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:08.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:08.762 "hdgst": false, 00:14:08.762 "ddgst": false 00:14:08.762 }, 00:14:08.762 "method": "bdev_nvme_attach_controller" 00:14:08.762 }' 00:14:08.762 [2024-12-05 11:57:33.510347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.510357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.522373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.522381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.534404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.534412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.540688] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:14:08.762 [2024-12-05 11:57:33.540736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208774 ] 00:14:08.762 [2024-12-05 11:57:33.546433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.546441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.558468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.558476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.570499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.570506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.582529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.582537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.594559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.594567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.606590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.606598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.618620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.618628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:14:08.762 [2024-12-05 11:57:33.624129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.762 [2024-12-05 11:57:33.630653] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.630661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.642683] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.642691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.653608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.762 [2024-12-05 11:57:33.654713] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.654721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.666748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.666758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.678779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.678792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.690808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.690819] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.702839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.702850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.714868] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.714876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.726908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.726925] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.738932] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.738942] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.750964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.750975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.762995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.763003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.775025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.775033] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.787057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.787065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.799091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.799101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:08.762 [2024-12-05 11:57:33.811122] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:08.762 [2024-12-05 11:57:33.811133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:33.823152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:33.823161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:33.835185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:33.835194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:33.847220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:33.847230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:33.859250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:33.859258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:33.871280] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:33.871289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:33.883312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:33.883319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:33.895344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:33.895354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:33.907375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 
[2024-12-05 11:57:33.907383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:33.919406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:33.919413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:33.931436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:33.931444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:33.943484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:33.943497] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:33.985955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:33.985969] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:33.995625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:33.995635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 Running I/O for 5 seconds... 
00:14:09.023 [2024-12-05 11:57:34.011421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:34.011437] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:34.024634] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:34.024650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:34.038264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:34.038280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:34.051078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:34.051098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.023 [2024-12-05 11:57:34.063242] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.023 [2024-12-05 11:57:34.063258] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.284 [2024-12-05 11:57:34.076006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.284 [2024-12-05 11:57:34.076022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.284 [2024-12-05 11:57:34.089720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.284 [2024-12-05 11:57:34.089736] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.284 [2024-12-05 11:57:34.102409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.284 [2024-12-05 11:57:34.102425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.284 [2024-12-05 11:57:34.115142] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.284 [2024-12-05 11:57:34.115157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.284 [2024-12-05 11:57:34.128342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.284 [2024-12-05 11:57:34.128357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.284 [2024-12-05 11:57:34.141279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.284 [2024-12-05 11:57:34.141294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.284 [2024-12-05 11:57:34.154547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.284 [2024-12-05 11:57:34.154563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.284 [2024-12-05 11:57:34.167995] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.284 [2024-12-05 11:57:34.168010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.284 [2024-12-05 11:57:34.181703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.284 [2024-12-05 11:57:34.181718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.284 [2024-12-05 11:57:34.194413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.284 [2024-12-05 11:57:34.194429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.284 [2024-12-05 11:57:34.206704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.284 [2024-12-05 11:57:34.206720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.284 [2024-12-05 11:57:34.219074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:09.284 [2024-12-05 11:57:34.219089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.285 [2024-12-05 11:57:34.231628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.285 [2024-12-05 11:57:34.231644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.285 [2024-12-05 11:57:34.244362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.285 [2024-12-05 11:57:34.244378] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.285 [2024-12-05 11:57:34.257930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.285 [2024-12-05 11:57:34.257945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.285 [2024-12-05 11:57:34.270273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.285 [2024-12-05 11:57:34.270289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.285 [2024-12-05 11:57:34.283711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.285 [2024-12-05 11:57:34.283728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.285 [2024-12-05 11:57:34.297018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.285 [2024-12-05 11:57:34.297042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.285 [2024-12-05 11:57:34.309710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.285 [2024-12-05 11:57:34.309726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.285 [2024-12-05 11:57:34.322564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.285 
[2024-12-05 11:57:34.322580] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.335206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.335222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.347874] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.347890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.360592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.360608] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.374159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.374174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.387385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.387401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.399747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.399763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.412337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.412353] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.425667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.425682] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.438073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.438088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.451337] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.451352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.464031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.464046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.477655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.477670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.490372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.490387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.504093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.504108] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.517360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.517376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.530174] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.530189] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:09.546 [2024-12-05 11:57:34.542707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.542726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.555984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.555999] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.569063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.569077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.546 [2024-12-05 11:57:34.582400] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.546 [2024-12-05 11:57:34.582414] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.596043] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.596059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.609167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.609182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.622509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.622524] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.635915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.635930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.648923] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.648938] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.661953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.661968] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.675364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.675380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.688186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.688201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.701094] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.701110] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.714206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.714221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.727673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.727689] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.740614] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.740629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.754203] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.754218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.766433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.766448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.779461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.779476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.792666] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.792684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.806106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.806121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.819026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.819042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.832228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.832243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:09.808 [2024-12-05 11:57:34.845298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:09.808 [2024-12-05 11:57:34.845313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:34.857806] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 
[2024-12-05 11:57:34.857822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:34.870920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:34.870935] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:34.883792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:34.883807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:34.897056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:34.897071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:34.909918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:34.909933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:34.923564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:34.923579] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:34.936496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:34.936511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:34.949525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:34.949540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:34.962694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:34.962709] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:34.975281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:34.975296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:34.988512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:34.988527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 19168.00 IOPS, 149.75 MiB/s [2024-12-05T10:57:35.119Z] [2024-12-05 11:57:35.001219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:35.001234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:35.014402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:35.014417] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:35.027682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:35.027698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:35.041175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:35.041190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:35.054033] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:35.054048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:35.067351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:35.067365] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:35.080841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:35.080856] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.070 [2024-12-05 11:57:35.094301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.070 [2024-12-05 11:57:35.094316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.071 [2024-12-05 11:57:35.106608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.071 [2024-12-05 11:57:35.106623] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.071 [2024-12-05 11:57:35.119468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.071 [2024-12-05 11:57:35.119483] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.132552] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.132568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.145241] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.145256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.158039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.158054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.171609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.171624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:10.332 [2024-12-05 11:57:35.185082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.185097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.197695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.197710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.211398] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.211412] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.223706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.223720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.236913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.236928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.250239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.250255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.263035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.263050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.275147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.275162] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.288320] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.288336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.301371] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.301386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.314792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.314807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.327257] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.327272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.339639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.339654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.351984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.351998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.365604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.365619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.332 [2024-12-05 11:57:35.379295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.332 [2024-12-05 11:57:35.379310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.392708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.392724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.406005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.406020] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.419314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.419329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.431852] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.431867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.444859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.444874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.457790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.457805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.470929] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.470944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.484569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.484584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.496865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 
[2024-12-05 11:57:35.496881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.510480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.510496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.523377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.523398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.536359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.536375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.549432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.549447] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.562657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.562671] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.575720] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.575734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.589086] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.589101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.602300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.602315] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.615414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.615429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.628601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.628615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.593 [2024-12-05 11:57:35.642041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.593 [2024-12-05 11:57:35.642056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.855 [2024-12-05 11:57:35.655586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.855 [2024-12-05 11:57:35.655601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.855 [2024-12-05 11:57:35.668757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.855 [2024-12-05 11:57:35.668772] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.855 [2024-12-05 11:57:35.682196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.855 [2024-12-05 11:57:35.682211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.855 [2024-12-05 11:57:35.695756] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.855 [2024-12-05 11:57:35.695771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:10.855 [2024-12-05 11:57:35.709166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:10.855 [2024-12-05 11:57:35.709181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:10.855 [2024-12-05 11:57:35.722692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:14:10.855 [2024-12-05 11:57:35.722708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:14:11.117 19277.50 IOPS, 150.61 MiB/s [2024-12-05T10:57:36.166Z]
00:14:12.164 19314.00 IOPS, 150.89 MiB/s [2024-12-05T10:57:37.213Z]
00:14:12.971 [2024-12-05 11:57:37.863102] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.971
[2024-12-05 11:57:37.863118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.971 [2024-12-05 11:57:37.876416] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.971 [2024-12-05 11:57:37.876432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.971 [2024-12-05 11:57:37.890035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.971 [2024-12-05 11:57:37.890050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.971 [2024-12-05 11:57:37.902965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.971 [2024-12-05 11:57:37.902980] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.971 [2024-12-05 11:57:37.916050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.971 [2024-12-05 11:57:37.916064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.971 [2024-12-05 11:57:37.929636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.971 [2024-12-05 11:57:37.929651] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.971 [2024-12-05 11:57:37.943205] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.971 [2024-12-05 11:57:37.943219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.971 [2024-12-05 11:57:37.956855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.971 [2024-12-05 11:57:37.956870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.971 [2024-12-05 11:57:37.969985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.971 [2024-12-05 11:57:37.969999] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.971 [2024-12-05 11:57:37.982779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.971 [2024-12-05 11:57:37.982794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.971 [2024-12-05 11:57:37.995942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.971 [2024-12-05 11:57:37.995957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:12.971 19339.75 IOPS, 151.09 MiB/s [2024-12-05T10:57:38.020Z] [2024-12-05 11:57:38.008931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:12.971 [2024-12-05 11:57:38.008947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.022028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.022043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.035283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.035298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.048557] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.048572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.061010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.061025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.074146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.074160] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.086569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.086584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.099555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.099570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.112918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.112933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.126369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.126384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.138914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.138930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.151489] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.151503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.164527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.164542] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.177986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.178001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:13.231 [2024-12-05 11:57:38.191574] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.191590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.204081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.204096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.216324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.216338] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.228864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.228878] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.241124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.241139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.254163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.254178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.267376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.231 [2024-12-05 11:57:38.267391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.231 [2024-12-05 11:57:38.280772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.280787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.293548] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.293563] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.306087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.306102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.318837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.318851] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.331935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.331949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.345293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.345308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.358628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.358643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.371208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.371223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.383567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.383582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.396918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.396934] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.409748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.409764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.422103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.422118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.434873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.434888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.447867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.447882] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.461022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.461036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.473739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.473754] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.486838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.486857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.500609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 
[2024-12-05 11:57:38.500624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.513859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.513874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.527126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.527140] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.492 [2024-12-05 11:57:38.540168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.492 [2024-12-05 11:57:38.540182] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.553479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.553495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.566991] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.567005] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.580000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.580014] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.593779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.593794] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.607260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.607275] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.619680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.619695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.633192] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.633206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.646497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.646513] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.659856] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.659872] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.673267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.673282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.686350] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.686365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.699202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.699217] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.712519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.712534] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:14:13.753 [2024-12-05 11:57:38.725576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.725591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.738472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.738490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.751458] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.751473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.764908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.764923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.777792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.777807] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:13.753 [2024-12-05 11:57:38.790972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:13.753 [2024-12-05 11:57:38.790987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.804207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.804222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.817072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.817087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.830134] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.830148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.843303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.843318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.856576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.856591] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.869917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.869932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.883438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.883458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.896895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.896910] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.910000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.910015] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.923136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.923151] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.936535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.936550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.950052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.950067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.963305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.963320] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.976395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.976411] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:38.988894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:38.988914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:39.002046] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:39.002062] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 19350.80 IOPS, 151.18 MiB/s [2024-12-05T10:57:39.064Z] [2024-12-05 11:57:39.013790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:39.013805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 00:14:14.015 Latency(us) 00:14:14.015 [2024-12-05T10:57:39.064Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:14.015 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:14:14.015 Nvme1n1 : 5.01 19351.61 151.18 0.00 0.00 6608.58 2676.05 17585.49 
00:14:14.015 [2024-12-05T10:57:39.064Z] =================================================================================================================== 00:14:14.015 [2024-12-05T10:57:39.064Z] Total : 19351.61 151.18 0.00 0.00 6608.58 2676.05 17585.49 00:14:14.015 [2024-12-05 11:57:39.023970] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:39.023983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:39.036008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:39.036022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:39.048031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:39.048043] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.015 [2024-12-05 11:57:39.060061] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.015 [2024-12-05 11:57:39.060073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.276 [2024-12-05 11:57:39.072092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.276 [2024-12-05 11:57:39.072102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.276 [2024-12-05 11:57:39.084121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.276 [2024-12-05 11:57:39.084130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.276 [2024-12-05 11:57:39.096154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.276 [2024-12-05 11:57:39.096164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.276 [2024-12-05 11:57:39.108183] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:14:14.276 [2024-12-05 11:57:39.108193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:14:14.276 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1208774) - No such process 00:14:14.276 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1208774 00:14:14.276 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:14.276 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.276 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:14.276 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.276 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:14.276 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.276 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:14.276 delay0 00:14:14.276 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.276 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:14:14.276 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.276 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:14.276 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.277 11:57:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:14:14.277 [2024-12-05 11:57:39.281797] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:14:22.411 Initializing NVMe Controllers 00:14:22.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:22.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:22.411 Initialization complete. Launching workers. 00:14:22.411 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 234, failed: 33153 00:14:22.411 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 33268, failed to submit 119 00:14:22.411 success 33187, unsuccessful 81, failed 0 00:14:22.411 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:14:22.411 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:14:22.411 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:22.411 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:14:22.411 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:22.412 rmmod nvme_tcp 00:14:22.412 rmmod nvme_fabrics 00:14:22.412 rmmod nvme_keyring 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:22.412 11:57:46 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 1206442 ']' 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 1206442 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1206442 ']' 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1206442 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1206442 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1206442' 00:14:22.412 killing process with pid 1206442 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1206442 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1206442 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@254 -- # local dev 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@257 -- # remove_target_ns 
00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:22.412 11:57:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # return 0 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@269 -- # 
flush_ip cvl_0_1 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:14:23.793 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@274 -- # iptr 00:14:23.794 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-save 00:14:23.794 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:23.794 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-restore 00:14:23.794 00:14:23.794 real 0m34.676s 00:14:23.794 user 0m45.559s 00:14:23.794 sys 0m12.025s 00:14:23.794 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.794 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:14:23.794 ************************************ 00:14:23.794 END TEST nvmf_zcopy 00:14:23.794 ************************************ 00:14:23.794 11:57:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:23.794 11:57:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:23.794 11:57:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:14:23.794 11:57:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:23.794 ************************************ 00:14:23.794 START TEST nvmf_nmic 00:14:23.794 ************************************ 00:14:23.794 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:14:24.054 * Looking for test storage... 00:14:24.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
scripts/common.sh@341 -- # ver2_l=1 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:24.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.054 --rc genhtml_branch_coverage=1 00:14:24.054 --rc genhtml_function_coverage=1 00:14:24.054 --rc genhtml_legend=1 00:14:24.054 --rc geninfo_all_blocks=1 00:14:24.054 --rc geninfo_unexecuted_blocks=1 00:14:24.054 00:14:24.054 ' 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:24.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.054 --rc genhtml_branch_coverage=1 00:14:24.054 --rc genhtml_function_coverage=1 00:14:24.054 --rc genhtml_legend=1 00:14:24.054 --rc geninfo_all_blocks=1 00:14:24.054 --rc geninfo_unexecuted_blocks=1 00:14:24.054 00:14:24.054 ' 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:24.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.054 --rc genhtml_branch_coverage=1 00:14:24.054 --rc genhtml_function_coverage=1 00:14:24.054 --rc genhtml_legend=1 00:14:24.054 --rc geninfo_all_blocks=1 00:14:24.054 --rc geninfo_unexecuted_blocks=1 00:14:24.054 00:14:24.054 ' 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:24.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:24.054 --rc genhtml_branch_coverage=1 00:14:24.054 --rc genhtml_function_coverage=1 00:14:24.054 --rc genhtml_legend=1 00:14:24.054 --rc geninfo_all_blocks=1 00:14:24.054 --rc geninfo_unexecuted_blocks=1 00:14:24.054 00:14:24.054 ' 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # 
uname -s 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:24.054 11:57:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:24.054 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:24.054 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:14:24.055 11:57:49 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:24.055 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:14:24.055 11:57:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 
-- # local -ga x722 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:32.197 11:57:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:32.197 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:32.197 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:32.197 11:57:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:32.197 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:32.197 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:32.198 11:57:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:32.198 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@247 -- # create_target_ns 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@139 -- # 
set_up lo NVMF_TARGET_NS_CMD 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@48 -- # 
ips=("$ip" $((++ip))) 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # 
eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:32.198 10.0.0.1 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:32.198 10.0.0.2 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 
-- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic 
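At this point the trace has completed one initiator/target pair: the target device was moved into the `nvmf_ns_spdk` namespace, both sides got addresses, both links came up, and an iptables rule opened port 4420. A dry-run sketch of that sequence (device and namespace names taken from the trace; `run` echoes instead of executing, since the real commands need root):

```shell
# Dry-run of the per-pair setup sequence traced above.
# 'run' prints each command rather than executing it.
run() { echo "+ $*"; }

ns=nvmf_ns_spdk
initiator=cvl_0_0
target=cvl_0_1

run ip link set "$target" netns "$ns"                       # add_to_ns
run ip addr add 10.0.0.1/24 dev "$initiator"                # set_ip (host side)
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target"  # set_ip (ns side)
run ip link set "$initiator" up                             # set_up
run ip netns exec "$ns" ip link set "$target" up
run iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT
```

Note the asymmetry: only the target device lives in the namespace, so its commands are wrapped in `ip netns exec`, which is exactly what the `NVMF_TARGET_NS_CMD` nameref in the trace provides.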
-- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:32.198 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:32.199 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:32.199 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.588 ms 00:14:32.199 00:14:32.199 --- 10.0.0.1 ping statistics --- 00:14:32.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.199 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:32.199 11:57:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:32.199 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:32.199 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:14:32.199 00:14:32.199 --- 10.0.0.2 ping statistics --- 00:14:32.199 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:32.199 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 
00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 
-- # get_net_dev initiator1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:14:32.199 11:57:56 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:14:32.199 ' 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:32.199 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:32.200 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=1215653 00:14:32.200 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 1215653 00:14:32.200 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:32.200 
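`nvmf_tgt` is started here with `-m 0xF`, and the DPDK log below confirms "Total cores available: 4" with reactors on cores 0-3. The core mask is a plain bitmap; a small sketch (helper name is hypothetical, not part of SPDK) decoding it:

```shell
# Decode an SPDK-style hex core mask into the list of core indices,
# e.g. 0xF -> cores 0 1 2 3. Helper name is illustrative only.
mask_to_cores() {
  local mask=$(( $1 )) core=0 out=""
  while [ "$mask" -ne 0 ]; do
    if [ $(( mask & 1 )) -eq 1 ]; then
      out="${out:+$out }$core"
    fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "$out"
}

mask_to_cores 0xF   # 0 1 2 3
```

This matches the four "Reactor started on core N" lines that follow in the log.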
11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1215653 ']' 00:14:32.200 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:32.200 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:32.200 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:32.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:32.200 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:32.200 11:57:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:32.200 [2024-12-05 11:57:56.600990] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:14:32.200 [2024-12-05 11:57:56.601054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:32.200 [2024-12-05 11:57:56.703605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:32.200 [2024-12-05 11:57:56.759762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.200 [2024-12-05 11:57:56.759823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.200 [2024-12-05 11:57:56.759832] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.200 [2024-12-05 11:57:56.759840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:14:32.200 [2024-12-05 11:57:56.759847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.200 [2024-12-05 11:57:56.762020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.200 [2024-12-05 11:57:56.762184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.200 [2024-12-05 11:57:56.762352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:32.200 [2024-12-05 11:57:56.762351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.461 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:32.461 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:14:32.461 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:32.461 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:32.461 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:32.461 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.461 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:32.461 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.461 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:32.461 [2024-12-05 11:57:57.472888] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.461 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.461 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:32.461 11:57:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.461 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:32.723 Malloc0 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:32.723 [2024-12-05 11:57:57.551257] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:14:32.723 test case1: single bdev can't be used in multiple subsystems 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:32.723 [2024-12-05 11:57:57.587046] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:14:32.723 [2024-12-05 11:57:57.587079] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:14:32.723 [2024-12-05 11:57:57.587090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:14:32.723 request: 00:14:32.723 { 00:14:32.723 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:32.723 "namespace": { 00:14:32.723 "bdev_name": "Malloc0", 00:14:32.723 "no_auto_visible": false, 00:14:32.723 "hide_metadata": false 00:14:32.723 }, 00:14:32.723 "method": "nvmf_subsystem_add_ns", 00:14:32.723 "req_id": 1 00:14:32.723 } 00:14:32.723 Got JSON-RPC error response 00:14:32.723 response: 00:14:32.723 { 00:14:32.723 "code": -32602, 00:14:32.723 "message": "Invalid parameters" 00:14:32.723 } 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:14:32.723 Adding namespace failed - expected result. 
00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:14:32.723 test case2: host connect to nvmf target in multiple paths 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:32.723 [2024-12-05 11:57:57.599258] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.723 11:57:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:34.113 11:57:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:14:36.028 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:14:36.028 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:14:36.028 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:36.028 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:36.028 11:58:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
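After the two `nvme connect` calls, `waitforserial` polls `lsblk` until a block device with the subsystem serial appears (the `(( i++ <= 15 ))` / `sleep 2` loop visible in the trace that follows). A runnable sketch of that polling pattern, with `lsblk` stubbed out so it works without real NVMe devices (the stub and loop details are assumptions, modeled on the traced behavior):

```shell
# Stub lsblk so the sketch runs anywhere; the real helper parses
# actual 'lsblk -l -o NAME,SERIAL' output.
lsblk() { printf 'nvme0n1 SPDKISFASTANDAWESOME\n'; }

# Poll until a device with the expected serial shows up, as the
# waitforserial helper in autotest_common.sh does.
waitforserial() {
  local serial=$1 i=0
  while [ "$i" -le 15 ]; do
    if lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; then
      return 0
    fi
    i=$(( i + 1 ))
    sleep 1
  done
  return 1
}

waitforserial SPDKISFASTANDAWESOME && echo connected
```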
00:14:37.581 11:58:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:37.882 11:58:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:37.882 11:58:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:37.882 11:58:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:37.882 11:58:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:37.882 11:58:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:14:37.882 11:58:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:37.882 [global] 00:14:37.882 thread=1 00:14:37.882 invalidate=1 00:14:37.882 rw=write 00:14:37.882 time_based=1 00:14:37.882 runtime=1 00:14:37.882 ioengine=libaio 00:14:37.882 direct=1 00:14:37.882 bs=4096 00:14:37.882 iodepth=1 00:14:37.882 norandommap=0 00:14:37.882 numjobs=1 00:14:37.882 00:14:37.882 verify_dump=1 00:14:37.882 verify_backlog=512 00:14:37.882 verify_state_save=0 00:14:37.882 do_verify=1 00:14:37.882 verify=crc32c-intel 00:14:37.882 [job0] 00:14:37.882 filename=/dev/nvme0n1 00:14:37.882 Could not set queue depth (nvme0n1) 00:14:38.147 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:38.147 fio-3.35 00:14:38.147 Starting 1 thread 00:14:39.086 00:14:39.087 job0: (groupid=0, jobs=1): err= 0: pid=1217040: Thu Dec 5 11:58:04 2024 00:14:39.087 read: IOPS=644, BW=2577KiB/s (2639kB/s)(2580KiB/1001msec) 00:14:39.087 slat (nsec): min=7219, max=59516, avg=24237.09, stdev=7755.85 00:14:39.087 clat (usec): min=421, max=1012, avg=759.79, stdev=71.65 00:14:39.087 lat (usec): min=430, max=1039, 
avg=784.03, stdev=74.05 00:14:39.087 clat percentiles (usec): 00:14:39.087 | 1.00th=[ 519], 5.00th=[ 635], 10.00th=[ 660], 20.00th=[ 701], 00:14:39.087 | 30.00th=[ 742], 40.00th=[ 766], 50.00th=[ 783], 60.00th=[ 791], 00:14:39.087 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 824], 95.00th=[ 840], 00:14:39.087 | 99.00th=[ 889], 99.50th=[ 889], 99.90th=[ 1012], 99.95th=[ 1012], 00:14:39.087 | 99.99th=[ 1012] 00:14:39.087 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:14:39.087 slat (usec): min=9, max=29688, avg=57.20, stdev=926.97 00:14:39.087 clat (usec): min=101, max=614, avg=414.12, stdev=71.11 00:14:39.087 lat (usec): min=112, max=30037, avg=471.31, stdev=928.02 00:14:39.087 clat percentiles (usec): 00:14:39.087 | 1.00th=[ 239], 5.00th=[ 281], 10.00th=[ 322], 20.00th=[ 347], 00:14:39.087 | 30.00th=[ 367], 40.00th=[ 408], 50.00th=[ 437], 60.00th=[ 457], 00:14:39.087 | 70.00th=[ 465], 80.00th=[ 474], 90.00th=[ 486], 95.00th=[ 498], 00:14:39.087 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 594], 99.95th=[ 611], 00:14:39.087 | 99.99th=[ 611] 00:14:39.087 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:14:39.087 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:39.087 lat (usec) : 250=1.26%, 500=57.52%, 750=14.68%, 1000=26.48% 00:14:39.087 lat (msec) : 2=0.06% 00:14:39.087 cpu : usr=2.50%, sys=4.40%, ctx=1673, majf=0, minf=1 00:14:39.087 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:39.087 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.087 issued rwts: total=645,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.087 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:39.087 00:14:39.087 Run status group 0 (all jobs): 00:14:39.087 READ: bw=2577KiB/s (2639kB/s), 2577KiB/s-2577KiB/s (2639kB/s-2639kB/s), io=2580KiB 
(2642kB), run=1001-1001msec 00:14:39.087 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:14:39.087 00:14:39.087 Disk stats (read/write): 00:14:39.087 nvme0n1: ios=569/1024, merge=0/0, ticks=801/417, in_queue=1218, util=98.70% 00:14:39.087 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.347 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@103 -- # for i in {1..20} 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:39.347 rmmod nvme_tcp 00:14:39.347 rmmod nvme_fabrics 00:14:39.347 rmmod nvme_keyring 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 1215653 ']' 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 1215653 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1215653 ']' 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1215653 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1215653 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1215653' 00:14:39.347 killing process with pid 1215653 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1215653 00:14:39.347 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1215653 00:14:39.607 11:58:04 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:39.607 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:14:39.607 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@254 -- # local dev 00:14:39.607 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:39.607 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:39.607 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:39.607 11:58:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # return 0 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@274 -- # iptr 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-save 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-restore 00:14:42.151 00:14:42.151 real 0m17.818s 00:14:42.151 user 0m46.651s 00:14:42.151 sys 0m6.562s 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:14:42.151 ************************************ 00:14:42.151 END TEST nvmf_nmic 00:14:42.151 ************************************ 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core -- 
nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:14:42.151 ************************************ 00:14:42.151 START TEST nvmf_fio_target 00:14:42.151 ************************************ 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:14:42.151 * Looking for test storage... 00:14:42.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.151 11:58:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:14:42.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.151 --rc genhtml_branch_coverage=1 00:14:42.151 --rc genhtml_function_coverage=1 00:14:42.151 --rc genhtml_legend=1 00:14:42.151 --rc geninfo_all_blocks=1 00:14:42.151 --rc geninfo_unexecuted_blocks=1 
00:14:42.151 00:14:42.151 ' 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:14:42.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.151 --rc genhtml_branch_coverage=1 00:14:42.151 --rc genhtml_function_coverage=1 00:14:42.151 --rc genhtml_legend=1 00:14:42.151 --rc geninfo_all_blocks=1 00:14:42.151 --rc geninfo_unexecuted_blocks=1 00:14:42.151 00:14:42.151 ' 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:14:42.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.151 --rc genhtml_branch_coverage=1 00:14:42.151 --rc genhtml_function_coverage=1 00:14:42.151 --rc genhtml_legend=1 00:14:42.151 --rc geninfo_all_blocks=1 00:14:42.151 --rc geninfo_unexecuted_blocks=1 00:14:42.151 00:14:42.151 ' 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:14:42.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.151 --rc genhtml_branch_coverage=1 00:14:42.151 --rc genhtml_function_coverage=1 00:14:42.151 --rc genhtml_legend=1 00:14:42.151 --rc geninfo_all_blocks=1 00:14:42.151 --rc geninfo_unexecuted_blocks=1 00:14:42.151 00:14:42.151 ' 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.151 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.152 11:58:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@50 
-- # : 0 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:42.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:42.152 11:58:06 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:14:42.152 11:58:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@135 -- # local -ga net_devs 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:14:50.317 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:50.318 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:50.318 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 
00:14:50.318 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:50.318 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:50.318 
11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@247 -- # create_target_ns 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:14:50.318 11:58:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:50.318 10.0.0.1 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # 
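The set_ip steps traced above derive dotted-quad addresses from 32-bit pool values via the val_to_ip helper. A minimal standalone sketch of that conversion; the octet-shift arithmetic is an assumption, since the trace only shows the final printf call with precomputed arguments:

```shell
#!/usr/bin/env bash
# Sketch of the val_to_ip helper seen in the trace: split a 32-bit
# value into four octets and print them as a dotted quad.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1, assigned to cvl_0_0 above
val_to_ip 167772162   # 10.0.0.2, assigned to cvl_0_1 above
```

This matches the ip_pool=0x0a000001 starting value seen earlier in the trace: consecutive pool values yield consecutive host addresses in 10.0.0.0/24.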
[[ -n NVMF_TARGET_NS_CMD ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:50.318 10.0.0.2 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- 
# set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:50.318 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:50.318 11:58:14 
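Condensed, the namespace wiring traced above (create_target_ns through the ipts firewall rule) comes down to a handful of ip/iptables commands. A dry-run sketch that echoes the commands rather than executing them, since the real sequence needs root and the physical cvl_0_* devices (device and namespace names are taken from this log):

```shell
#!/usr/bin/env bash
# Dry-run: print each command instead of executing it. Drop the `run`
# wrapper (and run as root) to perform the actual setup.
run() { echo "$@"; }

ns=nvmf_ns_spdk
run ip netns add "$ns"
run ip netns exec "$ns" ip link set lo up
run ip link set cvl_0_1 netns "$ns"              # target side moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_0          # initiator address
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_1
run ip link set cvl_0_0 up
run ip netns exec "$ns" ip link set cvl_0_1 up
run iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

Isolating the target interface in its own network namespace is what lets a single host exercise a real TCP path between initiator and target, which the ping_ips step below then verifies in both directions.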
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:50.319 
11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:50.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:50.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.639 ms 00:14:50.319 00:14:50.319 --- 10.0.0.1 ping statistics --- 00:14:50.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.319 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:14:50.319 11:58:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:50.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:50.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:14:50.319 00:14:50.319 --- 10.0.0.2 ping statistics --- 00:14:50.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.319 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:50.319 11:58:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:14:50.319 
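The address lookups in this part of the trace work because set_ip stored each IP in the interface's ifalias sysfs attribute, so get_ip_address reduces to a cat of that file. A sketch with an overridable sysfs root so it can be exercised without real devices; the second parameter is an addition for testability, not part of the traced helper:

```shell
#!/usr/bin/env bash
# Read an interface's address back from its ifalias attribute, the way
# the traced get_ip_address does. $2 lets tests point at a fake sysfs.
get_ip_address() {
  local dev=$1 sysfs=${2:-/sys}
  cat "$sysfs/class/net/$dev/ifalias"
}

# Example against a throwaway fake sysfs tree:
fake=$(mktemp -d)
mkdir -p "$fake/class/net/cvl_0_0"
echo 10.0.0.1 > "$fake/class/net/cvl_0_0/ifalias"
get_ip_address cvl_0_0 "$fake"   # prints 10.0.0.1
```

For devices inside the namespace (target0/cvl_0_1), the same read is simply wrapped in `ip netns exec nvmf_ns_spdk`, as the eval lines above show.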
11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:14:50.319 11:58:14 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:14:50.319 ' 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=1221738 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 1221738 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # 
'[' -z 1221738 ']' 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.319 11:58:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.319 [2024-12-05 11:58:14.631610] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:14:50.319 [2024-12-05 11:58:14.631680] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:50.319 [2024-12-05 11:58:14.726351] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:50.320 [2024-12-05 11:58:14.786617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:50.320 [2024-12-05 11:58:14.786695] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:50.320 [2024-12-05 11:58:14.786706] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:50.320 [2024-12-05 11:58:14.786716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:50.320 [2024-12-05 11:58:14.786725] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:50.320 [2024-12-05 11:58:14.789451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:50.320 [2024-12-05 11:58:14.789615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:50.320 [2024-12-05 11:58:14.789778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:50.320 [2024-12-05 11:58:14.789783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.581 11:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:50.581 11:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:14:50.581 11:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:50.581 11:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:50.581 11:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.581 11:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:50.581 11:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:50.842 [2024-12-05 11:58:15.787037] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:50.842 11:58:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:51.103 11:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:14:51.103 11:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:51.364 11:58:16 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:14:51.364 11:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:51.625 11:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:14:51.625 11:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:51.886 11:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:14:51.886 11:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:14:51.886 11:58:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:52.147 11:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:14:52.147 11:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:52.409 11:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:14:52.409 11:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:52.671 11:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:14:52.671 11:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:14:52.671 11:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:52.931 11:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:52.931 11:58:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:53.192 11:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:14:53.192 11:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:53.452 11:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:53.452 [2024-12-05 11:58:18.426410] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:53.452 11:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:14:53.713 11:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:14:53.974 11:58:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:14:55.359 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:14:55.359 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:14:55.359 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:55.359 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:14:55.359 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:14:55.359 11:58:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:14:57.900 11:58:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:57.900 11:58:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:57.900 11:58:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:57.900 11:58:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:14:57.901 11:58:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:57.901 11:58:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:14:57.901 11:58:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:14:57.901 [global] 00:14:57.901 thread=1 00:14:57.901 invalidate=1 00:14:57.901 rw=write 00:14:57.901 time_based=1 00:14:57.901 runtime=1 00:14:57.901 ioengine=libaio 00:14:57.901 direct=1 00:14:57.901 bs=4096 00:14:57.901 iodepth=1 00:14:57.901 norandommap=0 00:14:57.901 numjobs=1 00:14:57.901 00:14:57.901 
verify_dump=1 00:14:57.901 verify_backlog=512 00:14:57.901 verify_state_save=0 00:14:57.901 do_verify=1 00:14:57.901 verify=crc32c-intel 00:14:57.901 [job0] 00:14:57.901 filename=/dev/nvme0n1 00:14:57.901 [job1] 00:14:57.901 filename=/dev/nvme0n2 00:14:57.901 [job2] 00:14:57.901 filename=/dev/nvme0n3 00:14:57.901 [job3] 00:14:57.901 filename=/dev/nvme0n4 00:14:57.901 Could not set queue depth (nvme0n1) 00:14:57.901 Could not set queue depth (nvme0n2) 00:14:57.901 Could not set queue depth (nvme0n3) 00:14:57.901 Could not set queue depth (nvme0n4) 00:14:57.901 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:57.901 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:57.901 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:57.901 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:57.901 fio-3.35 00:14:57.901 Starting 4 threads 00:14:59.287 00:14:59.287 job0: (groupid=0, jobs=1): err= 0: pid=1223599: Thu Dec 5 11:58:24 2024 00:14:59.287 read: IOPS=16, BW=67.5KiB/s (69.1kB/s)(68.0KiB/1008msec) 00:14:59.287 slat (nsec): min=8221, max=27158, avg=24473.65, stdev=5866.08 00:14:59.287 clat (usec): min=783, max=42102, avg=39532.63, stdev=9985.70 00:14:59.287 lat (usec): min=793, max=42128, avg=39557.10, stdev=9989.54 00:14:59.287 clat percentiles (usec): 00:14:59.287 | 1.00th=[ 783], 5.00th=[ 783], 10.00th=[41681], 20.00th=[41681], 00:14:59.287 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:14:59.287 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:14:59.287 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:14:59.287 | 99.99th=[42206] 00:14:59.287 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:14:59.287 slat (nsec): min=9934, max=56194, 
avg=30379.72, stdev=10072.82 00:14:59.287 clat (usec): min=247, max=870, avg=617.12, stdev=108.39 00:14:59.287 lat (usec): min=257, max=905, avg=647.50, stdev=113.44 00:14:59.287 clat percentiles (usec): 00:14:59.287 | 1.00th=[ 355], 5.00th=[ 396], 10.00th=[ 469], 20.00th=[ 537], 00:14:59.287 | 30.00th=[ 578], 40.00th=[ 603], 50.00th=[ 627], 60.00th=[ 652], 00:14:59.287 | 70.00th=[ 685], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 775], 00:14:59.287 | 99.00th=[ 816], 99.50th=[ 840], 99.90th=[ 873], 99.95th=[ 873], 00:14:59.287 | 99.99th=[ 873] 00:14:59.287 bw ( KiB/s): min= 4096, max= 4096, per=35.48%, avg=4096.00, stdev= 0.00, samples=1 00:14:59.287 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:59.287 lat (usec) : 250=0.19%, 500=14.18%, 750=73.72%, 1000=8.88% 00:14:59.287 lat (msec) : 50=3.02% 00:14:59.287 cpu : usr=0.70%, sys=1.49%, ctx=530, majf=0, minf=1 00:14:59.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:59.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.287 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.287 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:59.287 job1: (groupid=0, jobs=1): err= 0: pid=1223618: Thu Dec 5 11:58:24 2024 00:14:59.288 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:59.288 slat (nsec): min=7141, max=46843, avg=27220.26, stdev=5025.37 00:14:59.288 clat (usec): min=796, max=1329, avg=1078.52, stdev=84.04 00:14:59.288 lat (usec): min=804, max=1356, avg=1105.74, stdev=85.25 00:14:59.288 clat percentiles (usec): 00:14:59.288 | 1.00th=[ 873], 5.00th=[ 922], 10.00th=[ 963], 20.00th=[ 1012], 00:14:59.288 | 30.00th=[ 1045], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1106], 00:14:59.288 | 70.00th=[ 1123], 80.00th=[ 1139], 90.00th=[ 1172], 95.00th=[ 1205], 00:14:59.288 | 99.00th=[ 1254], 99.50th=[ 1287], 
99.90th=[ 1336], 99.95th=[ 1336], 00:14:59.288 | 99.99th=[ 1336] 00:14:59.288 write: IOPS=567, BW=2270KiB/s (2324kB/s)(2272KiB/1001msec); 0 zone resets 00:14:59.288 slat (usec): min=9, max=33061, avg=87.92, stdev=1386.01 00:14:59.288 clat (usec): min=178, max=997, avg=660.48, stdev=139.24 00:14:59.288 lat (usec): min=189, max=33726, avg=748.39, stdev=1393.68 00:14:59.288 clat percentiles (usec): 00:14:59.288 | 1.00th=[ 338], 5.00th=[ 408], 10.00th=[ 457], 20.00th=[ 529], 00:14:59.288 | 30.00th=[ 594], 40.00th=[ 660], 50.00th=[ 693], 60.00th=[ 717], 00:14:59.288 | 70.00th=[ 750], 80.00th=[ 783], 90.00th=[ 824], 95.00th=[ 848], 00:14:59.288 | 99.00th=[ 898], 99.50th=[ 914], 99.90th=[ 996], 99.95th=[ 996], 00:14:59.288 | 99.99th=[ 996] 00:14:59.288 bw ( KiB/s): min= 4096, max= 4096, per=35.48%, avg=4096.00, stdev= 0.00, samples=1 00:14:59.288 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:59.288 lat (usec) : 250=0.09%, 500=8.80%, 750=28.52%, 1000=23.15% 00:14:59.288 lat (msec) : 2=39.44% 00:14:59.288 cpu : usr=1.90%, sys=3.80%, ctx=1082, majf=0, minf=1 00:14:59.288 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:59.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.288 issued rwts: total=512,568,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.288 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:59.288 job2: (groupid=0, jobs=1): err= 0: pid=1223639: Thu Dec 5 11:58:24 2024 00:14:59.288 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:14:59.288 slat (nsec): min=7267, max=45983, avg=26958.51, stdev=4294.67 00:14:59.288 clat (usec): min=171, max=41625, avg=931.89, stdev=2103.41 00:14:59.288 lat (usec): min=179, max=41653, avg=958.85, stdev=2103.51 00:14:59.288 clat percentiles (usec): 00:14:59.288 | 1.00th=[ 486], 5.00th=[ 553], 10.00th=[ 586], 20.00th=[ 635], 00:14:59.288 | 
30.00th=[ 717], 40.00th=[ 766], 50.00th=[ 824], 60.00th=[ 889], 00:14:59.288 | 70.00th=[ 922], 80.00th=[ 955], 90.00th=[ 988], 95.00th=[ 1020], 00:14:59.288 | 99.00th=[ 1074], 99.50th=[ 1156], 99.90th=[41681], 99.95th=[41681], 00:14:59.288 | 99.99th=[41681] 00:14:59.288 write: IOPS=891, BW=3564KiB/s (3650kB/s)(3568KiB/1001msec); 0 zone resets 00:14:59.288 slat (nsec): min=9618, max=71723, avg=30752.86, stdev=11627.32 00:14:59.288 clat (usec): min=118, max=883, avg=528.41, stdev=139.51 00:14:59.288 lat (usec): min=129, max=918, avg=559.16, stdev=145.03 00:14:59.288 clat percentiles (usec): 00:14:59.288 | 1.00th=[ 137], 5.00th=[ 273], 10.00th=[ 322], 20.00th=[ 408], 00:14:59.288 | 30.00th=[ 474], 40.00th=[ 510], 50.00th=[ 545], 60.00th=[ 578], 00:14:59.288 | 70.00th=[ 611], 80.00th=[ 644], 90.00th=[ 701], 95.00th=[ 734], 00:14:59.288 | 99.00th=[ 791], 99.50th=[ 799], 99.90th=[ 881], 99.95th=[ 881], 00:14:59.288 | 99.99th=[ 881] 00:14:59.288 bw ( KiB/s): min= 4096, max= 4096, per=35.48%, avg=4096.00, stdev= 0.00, samples=1 00:14:59.288 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:14:59.288 lat (usec) : 250=1.92%, 500=22.22%, 750=50.85%, 1000=22.01% 00:14:59.288 lat (msec) : 2=2.85%, 50=0.14% 00:14:59.288 cpu : usr=2.90%, sys=4.10%, ctx=1405, majf=0, minf=1 00:14:59.288 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:59.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.288 issued rwts: total=512,892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.288 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:59.288 job3: (groupid=0, jobs=1): err= 0: pid=1223646: Thu Dec 5 11:58:24 2024 00:14:59.288 read: IOPS=494, BW=1977KiB/s (2024kB/s)(2052KiB/1038msec) 00:14:59.288 slat (nsec): min=7490, max=61605, avg=26114.81, stdev=7210.92 00:14:59.288 clat (usec): min=556, max=41191, avg=1026.76, 
stdev=1785.52 00:14:59.288 lat (usec): min=583, max=41219, avg=1052.88, stdev=1785.83 00:14:59.288 clat percentiles (usec): 00:14:59.288 | 1.00th=[ 627], 5.00th=[ 676], 10.00th=[ 734], 20.00th=[ 775], 00:14:59.288 | 30.00th=[ 807], 40.00th=[ 848], 50.00th=[ 963], 60.00th=[ 1037], 00:14:59.288 | 70.00th=[ 1074], 80.00th=[ 1123], 90.00th=[ 1172], 95.00th=[ 1205], 00:14:59.288 | 99.00th=[ 1237], 99.50th=[ 1270], 99.90th=[41157], 99.95th=[41157], 00:14:59.288 | 99.99th=[41157] 00:14:59.288 write: IOPS=986, BW=3946KiB/s (4041kB/s)(4096KiB/1038msec); 0 zone resets 00:14:59.288 slat (nsec): min=9919, max=71424, avg=28869.69, stdev=11668.30 00:14:59.288 clat (usec): min=202, max=1203, avg=446.17, stdev=94.84 00:14:59.288 lat (usec): min=221, max=1239, avg=475.04, stdev=99.07 00:14:59.288 clat percentiles (usec): 00:14:59.288 | 1.00th=[ 260], 5.00th=[ 297], 10.00th=[ 330], 20.00th=[ 363], 00:14:59.288 | 30.00th=[ 404], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 469], 00:14:59.288 | 70.00th=[ 486], 80.00th=[ 510], 90.00th=[ 545], 95.00th=[ 603], 00:14:59.288 | 99.00th=[ 685], 99.50th=[ 717], 99.90th=[ 1123], 99.95th=[ 1205], 00:14:59.288 | 99.99th=[ 1205] 00:14:59.288 bw ( KiB/s): min= 4096, max= 4096, per=35.48%, avg=4096.00, stdev= 0.00, samples=2 00:14:59.288 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:14:59.288 lat (usec) : 250=0.52%, 500=50.94%, 750=19.71%, 1000=12.88% 00:14:59.288 lat (msec) : 2=15.88%, 50=0.07% 00:14:59.288 cpu : usr=2.60%, sys=4.44%, ctx=1538, majf=0, minf=2 00:14:59.288 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:59.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:59.288 issued rwts: total=513,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:59.288 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:59.288 00:14:59.288 Run status group 0 (all jobs): 00:14:59.288 
READ: bw=5988KiB/s (6132kB/s), 67.5KiB/s-2046KiB/s (69.1kB/s-2095kB/s), io=6216KiB (6365kB), run=1001-1038msec 00:14:59.288 WRITE: bw=11.3MiB/s (11.8MB/s), 2032KiB/s-3946KiB/s (2081kB/s-4041kB/s), io=11.7MiB (12.3MB), run=1001-1038msec 00:14:59.288 00:14:59.288 Disk stats (read/write): 00:14:59.288 nvme0n1: ios=34/512, merge=0/0, ticks=1302/309, in_queue=1611, util=83.87% 00:14:59.288 nvme0n2: ios=445/512, merge=0/0, ticks=1287/322, in_queue=1609, util=87.74% 00:14:59.288 nvme0n3: ios=575/582, merge=0/0, ticks=1083/257, in_queue=1340, util=94.92% 00:14:59.288 nvme0n4: ios=535/689, merge=0/0, ticks=1333/292, in_queue=1625, util=94.11% 00:14:59.288 11:58:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:14:59.288 [global] 00:14:59.288 thread=1 00:14:59.289 invalidate=1 00:14:59.289 rw=randwrite 00:14:59.289 time_based=1 00:14:59.289 runtime=1 00:14:59.289 ioengine=libaio 00:14:59.289 direct=1 00:14:59.289 bs=4096 00:14:59.289 iodepth=1 00:14:59.289 norandommap=0 00:14:59.289 numjobs=1 00:14:59.289 00:14:59.289 verify_dump=1 00:14:59.289 verify_backlog=512 00:14:59.289 verify_state_save=0 00:14:59.289 do_verify=1 00:14:59.289 verify=crc32c-intel 00:14:59.289 [job0] 00:14:59.289 filename=/dev/nvme0n1 00:14:59.289 [job1] 00:14:59.289 filename=/dev/nvme0n2 00:14:59.289 [job2] 00:14:59.289 filename=/dev/nvme0n3 00:14:59.289 [job3] 00:14:59.289 filename=/dev/nvme0n4 00:14:59.289 Could not set queue depth (nvme0n1) 00:14:59.289 Could not set queue depth (nvme0n2) 00:14:59.289 Could not set queue depth (nvme0n3) 00:14:59.289 Could not set queue depth (nvme0n4) 00:14:59.550 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:59.550 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:59.550 job2: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:59.550 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:14:59.550 fio-3.35 00:14:59.550 Starting 4 threads 00:15:00.935 00:15:00.935 job0: (groupid=0, jobs=1): err= 0: pid=1224074: Thu Dec 5 11:58:25 2024 00:15:00.935 read: IOPS=18, BW=75.8KiB/s (77.7kB/s)(76.0KiB/1002msec) 00:15:00.935 slat (nsec): min=8500, max=27541, avg=24201.47, stdev=3882.12 00:15:00.935 clat (usec): min=802, max=42029, avg=36735.31, stdev=13082.46 00:15:00.935 lat (usec): min=828, max=42054, avg=36759.52, stdev=13081.48 00:15:00.935 clat percentiles (usec): 00:15:00.935 | 1.00th=[ 799], 5.00th=[ 799], 10.00th=[ 979], 20.00th=[41157], 00:15:00.935 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:15:00.935 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:00.935 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:00.935 | 99.99th=[42206] 00:15:00.935 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:15:00.935 slat (nsec): min=8961, max=77667, avg=24589.88, stdev=11247.71 00:15:00.935 clat (usec): min=249, max=915, avg=561.78, stdev=122.37 00:15:00.935 lat (usec): min=259, max=947, avg=586.37, stdev=126.90 00:15:00.935 clat percentiles (usec): 00:15:00.935 | 1.00th=[ 322], 5.00th=[ 359], 10.00th=[ 392], 20.00th=[ 457], 00:15:00.935 | 30.00th=[ 490], 40.00th=[ 523], 50.00th=[ 562], 60.00th=[ 594], 00:15:00.935 | 70.00th=[ 627], 80.00th=[ 676], 90.00th=[ 725], 95.00th=[ 758], 00:15:00.935 | 99.00th=[ 848], 99.50th=[ 881], 99.90th=[ 914], 99.95th=[ 914], 00:15:00.935 | 99.99th=[ 914] 00:15:00.935 bw ( KiB/s): min= 4096, max= 4096, per=46.85%, avg=4096.00, stdev= 0.00, samples=1 00:15:00.935 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:00.935 lat (usec) : 250=0.19%, 500=30.89%, 750=59.13%, 1000=6.59% 00:15:00.935 lat (msec) : 
50=3.20% 00:15:00.935 cpu : usr=1.00%, sys=1.40%, ctx=532, majf=0, minf=1 00:15:00.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:00.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.935 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:00.935 job1: (groupid=0, jobs=1): err= 0: pid=1224092: Thu Dec 5 11:58:25 2024 00:15:00.935 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:15:00.935 slat (nsec): min=26552, max=60700, avg=27224.27, stdev=2167.37 00:15:00.935 clat (usec): min=798, max=41179, avg=1040.92, stdev=1778.04 00:15:00.935 lat (usec): min=826, max=41206, avg=1068.15, stdev=1778.02 00:15:00.935 clat percentiles (usec): 00:15:00.935 | 1.00th=[ 832], 5.00th=[ 873], 10.00th=[ 906], 20.00th=[ 930], 00:15:00.935 | 30.00th=[ 947], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 971], 00:15:00.935 | 70.00th=[ 988], 80.00th=[ 996], 90.00th=[ 1020], 95.00th=[ 1037], 00:15:00.935 | 99.00th=[ 1074], 99.50th=[ 1205], 99.90th=[41157], 99.95th=[41157], 00:15:00.935 | 99.99th=[41157] 00:15:00.935 write: IOPS=723, BW=2893KiB/s (2963kB/s)(2896KiB/1001msec); 0 zone resets 00:15:00.935 slat (nsec): min=9170, max=52613, avg=28416.96, stdev=10791.42 00:15:00.935 clat (usec): min=207, max=828, avg=583.97, stdev=105.70 00:15:00.935 lat (usec): min=217, max=862, avg=612.39, stdev=110.87 00:15:00.935 clat percentiles (usec): 00:15:00.935 | 1.00th=[ 347], 5.00th=[ 404], 10.00th=[ 441], 20.00th=[ 486], 00:15:00.935 | 30.00th=[ 529], 40.00th=[ 570], 50.00th=[ 586], 60.00th=[ 611], 00:15:00.935 | 70.00th=[ 652], 80.00th=[ 676], 90.00th=[ 717], 95.00th=[ 750], 00:15:00.935 | 99.00th=[ 807], 99.50th=[ 816], 99.90th=[ 832], 99.95th=[ 832], 00:15:00.935 | 99.99th=[ 832] 00:15:00.935 bw ( KiB/s): min= 4096, max= 4096, per=46.85%, avg=4096.00, 
stdev= 0.00, samples=1 00:15:00.935 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:00.935 lat (usec) : 250=0.08%, 500=14.08%, 750=41.42%, 1000=37.30% 00:15:00.935 lat (msec) : 2=7.04%, 50=0.08% 00:15:00.935 cpu : usr=3.30%, sys=3.80%, ctx=1238, majf=0, minf=1 00:15:00.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:00.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.935 issued rwts: total=512,724,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:00.935 job2: (groupid=0, jobs=1): err= 0: pid=1224111: Thu Dec 5 11:58:25 2024 00:15:00.935 read: IOPS=17, BW=69.6KiB/s (71.3kB/s)(72.0KiB/1034msec) 00:15:00.935 slat (nsec): min=26321, max=26837, avg=26562.61, stdev=143.43 00:15:00.935 clat (usec): min=41884, max=42299, avg=41980.19, stdev=88.74 00:15:00.935 lat (usec): min=41910, max=42325, avg=42006.75, stdev=88.69 00:15:00.935 clat percentiles (usec): 00:15:00.935 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:15:00.935 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:15:00.935 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:00.935 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:00.935 | 99.99th=[42206] 00:15:00.935 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:15:00.935 slat (nsec): min=9547, max=64777, avg=30122.90, stdev=9246.08 00:15:00.935 clat (usec): min=123, max=785, avg=503.52, stdev=124.47 00:15:00.935 lat (usec): min=133, max=818, avg=533.64, stdev=128.08 00:15:00.935 clat percentiles (usec): 00:15:00.935 | 1.00th=[ 167], 5.00th=[ 285], 10.00th=[ 351], 20.00th=[ 400], 00:15:00.935 | 30.00th=[ 429], 40.00th=[ 478], 50.00th=[ 515], 60.00th=[ 545], 00:15:00.935 | 70.00th=[ 570], 80.00th=[ 
619], 90.00th=[ 660], 95.00th=[ 701], 00:15:00.935 | 99.00th=[ 742], 99.50th=[ 758], 99.90th=[ 783], 99.95th=[ 783], 00:15:00.935 | 99.99th=[ 783] 00:15:00.935 bw ( KiB/s): min= 4096, max= 4096, per=46.85%, avg=4096.00, stdev= 0.00, samples=1 00:15:00.935 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:00.935 lat (usec) : 250=2.45%, 500=43.02%, 750=50.38%, 1000=0.75% 00:15:00.935 lat (msec) : 50=3.40% 00:15:00.935 cpu : usr=0.87%, sys=1.45%, ctx=531, majf=0, minf=1 00:15:00.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:00.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.935 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:00.935 job3: (groupid=0, jobs=1): err= 0: pid=1224118: Thu Dec 5 11:58:25 2024 00:15:00.935 read: IOPS=15, BW=63.4KiB/s (65.0kB/s)(64.0KiB/1009msec) 00:15:00.935 slat (nsec): min=26342, max=27142, avg=26579.88, stdev=203.06 00:15:00.935 clat (usec): min=41880, max=42118, avg=41958.07, stdev=60.49 00:15:00.935 lat (usec): min=41906, max=42144, avg=41984.65, stdev=60.49 00:15:00.935 clat percentiles (usec): 00:15:00.935 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:15:00.935 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:15:00.935 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:00.935 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:00.935 | 99.99th=[42206] 00:15:00.935 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:15:00.935 slat (nsec): min=9771, max=55571, avg=30768.89, stdev=8641.57 00:15:00.935 clat (usec): min=273, max=983, avg=619.08, stdev=118.40 00:15:00.935 lat (usec): min=293, max=1018, avg=649.85, stdev=121.85 00:15:00.935 clat 
percentiles (usec): 00:15:00.935 | 1.00th=[ 322], 5.00th=[ 396], 10.00th=[ 461], 20.00th=[ 519], 00:15:00.935 | 30.00th=[ 562], 40.00th=[ 594], 50.00th=[ 635], 60.00th=[ 668], 00:15:00.935 | 70.00th=[ 693], 80.00th=[ 725], 90.00th=[ 758], 95.00th=[ 791], 00:15:00.935 | 99.00th=[ 840], 99.50th=[ 857], 99.90th=[ 988], 99.95th=[ 988], 00:15:00.935 | 99.99th=[ 988] 00:15:00.935 bw ( KiB/s): min= 4096, max= 4096, per=46.85%, avg=4096.00, stdev= 0.00, samples=1 00:15:00.935 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:00.935 lat (usec) : 500=16.29%, 750=68.94%, 1000=11.74% 00:15:00.935 lat (msec) : 50=3.03% 00:15:00.935 cpu : usr=0.89%, sys=1.49%, ctx=529, majf=0, minf=1 00:15:00.935 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:00.935 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.935 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:00.935 issued rwts: total=16,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:00.935 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:00.935 00:15:00.935 Run status group 0 (all jobs): 00:15:00.935 READ: bw=2186KiB/s (2238kB/s), 63.4KiB/s-2046KiB/s (65.0kB/s-2095kB/s), io=2260KiB (2314kB), run=1001-1034msec 00:15:00.935 WRITE: bw=8743KiB/s (8953kB/s), 1981KiB/s-2893KiB/s (2028kB/s-2963kB/s), io=9040KiB (9257kB), run=1001-1034msec 00:15:00.935 00:15:00.935 Disk stats (read/write): 00:15:00.935 nvme0n1: ios=65/512, merge=0/0, ticks=587/246, in_queue=833, util=87.78% 00:15:00.935 nvme0n2: ios=501/512, merge=0/0, ticks=1414/246, in_queue=1660, util=92.46% 00:15:00.936 nvme0n3: ios=42/512, merge=0/0, ticks=1030/234, in_queue=1264, util=97.26% 00:15:00.936 nvme0n4: ios=69/512, merge=0/0, ticks=1072/305, in_queue=1377, util=96.69% 00:15:00.936 11:58:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t 
write -r 1 -v 00:15:00.936 [global] 00:15:00.936 thread=1 00:15:00.936 invalidate=1 00:15:00.936 rw=write 00:15:00.936 time_based=1 00:15:00.936 runtime=1 00:15:00.936 ioengine=libaio 00:15:00.936 direct=1 00:15:00.936 bs=4096 00:15:00.936 iodepth=128 00:15:00.936 norandommap=0 00:15:00.936 numjobs=1 00:15:00.936 00:15:00.936 verify_dump=1 00:15:00.936 verify_backlog=512 00:15:00.936 verify_state_save=0 00:15:00.936 do_verify=1 00:15:00.936 verify=crc32c-intel 00:15:00.936 [job0] 00:15:00.936 filename=/dev/nvme0n1 00:15:00.936 [job1] 00:15:00.936 filename=/dev/nvme0n2 00:15:00.936 [job2] 00:15:00.936 filename=/dev/nvme0n3 00:15:00.936 [job3] 00:15:00.936 filename=/dev/nvme0n4 00:15:00.936 Could not set queue depth (nvme0n1) 00:15:00.936 Could not set queue depth (nvme0n2) 00:15:00.936 Could not set queue depth (nvme0n3) 00:15:00.936 Could not set queue depth (nvme0n4) 00:15:01.195 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.195 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.195 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.195 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:01.195 fio-3.35 00:15:01.195 Starting 4 threads 00:15:02.579 00:15:02.579 job0: (groupid=0, jobs=1): err= 0: pid=1224573: Thu Dec 5 11:58:27 2024 00:15:02.579 read: IOPS=5730, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1005msec) 00:15:02.579 slat (nsec): min=947, max=20811k, avg=64817.26, stdev=621271.40 00:15:02.579 clat (usec): min=1671, max=39013, avg=10119.92, stdev=4136.94 00:15:02.579 lat (usec): min=3750, max=39038, avg=10184.74, stdev=4196.76 00:15:02.579 clat percentiles (usec): 00:15:02.579 | 1.00th=[ 4948], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7570], 00:15:02.579 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9372], 
60.00th=[ 9765], 00:15:02.579 | 70.00th=[10421], 80.00th=[11207], 90.00th=[14615], 95.00th=[18744], 00:15:02.579 | 99.00th=[28443], 99.50th=[28443], 99.90th=[32113], 99.95th=[38536], 00:15:02.579 | 99.99th=[39060] 00:15:02.579 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:15:02.579 slat (nsec): min=1596, max=9977.5k, avg=64988.86, stdev=417531.07 00:15:02.579 clat (usec): min=1286, max=45323, avg=11216.27, stdev=8179.50 00:15:02.579 lat (usec): min=1298, max=45331, avg=11281.26, stdev=8225.08 00:15:02.579 clat percentiles (usec): 00:15:02.579 | 1.00th=[ 2442], 5.00th=[ 4113], 10.00th=[ 4621], 20.00th=[ 5407], 00:15:02.579 | 30.00th=[ 6456], 40.00th=[ 6849], 50.00th=[ 7308], 60.00th=[ 9110], 00:15:02.579 | 70.00th=[12911], 80.00th=[15926], 90.00th=[25035], 95.00th=[29754], 00:15:02.579 | 99.00th=[38011], 99.50th=[39584], 99.90th=[41157], 99.95th=[42730], 00:15:02.579 | 99.99th=[45351] 00:15:02.579 bw ( KiB/s): min=20392, max=28752, per=27.43%, avg=24572.00, stdev=5911.41, samples=2 00:15:02.579 iops : min= 5098, max= 7188, avg=6143.00, stdev=1477.85, samples=2 00:15:02.579 lat (msec) : 2=0.18%, 4=2.52%, 10=60.26%, 20=27.51%, 50=9.53% 00:15:02.579 cpu : usr=5.28%, sys=6.87%, ctx=406, majf=0, minf=1 00:15:02.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:02.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:02.579 issued rwts: total=5759,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:02.579 job1: (groupid=0, jobs=1): err= 0: pid=1224594: Thu Dec 5 11:58:27 2024 00:15:02.579 read: IOPS=3310, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1004msec) 00:15:02.579 slat (nsec): min=910, max=40881k, avg=143670.47, stdev=1165932.26 00:15:02.579 clat (usec): min=1722, max=83159, avg=18704.52, stdev=14322.52 00:15:02.579 lat (usec): 
min=3591, max=83166, avg=18848.19, stdev=14423.56 00:15:02.579 clat percentiles (usec): 00:15:02.579 | 1.00th=[ 4948], 5.00th=[ 5538], 10.00th=[ 6194], 20.00th=[ 7177], 00:15:02.579 | 30.00th=[ 8160], 40.00th=[10552], 50.00th=[15270], 60.00th=[19268], 00:15:02.579 | 70.00th=[23462], 80.00th=[26870], 90.00th=[33162], 95.00th=[43779], 00:15:02.579 | 99.00th=[73925], 99.50th=[73925], 99.90th=[76022], 99.95th=[76022], 00:15:02.579 | 99.99th=[83362] 00:15:02.579 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:15:02.579 slat (nsec): min=1708, max=14200k, avg=131364.00, stdev=711649.75 00:15:02.579 clat (usec): min=1241, max=69712, avg=18066.28, stdev=15246.42 00:15:02.579 lat (usec): min=1253, max=73251, avg=18197.64, stdev=15343.27 00:15:02.579 clat percentiles (usec): 00:15:02.579 | 1.00th=[ 2802], 5.00th=[ 4686], 10.00th=[ 6194], 20.00th=[ 6652], 00:15:02.579 | 30.00th=[ 7504], 40.00th=[ 8356], 50.00th=[13566], 60.00th=[15139], 00:15:02.579 | 70.00th=[19792], 80.00th=[26870], 90.00th=[40109], 95.00th=[56361], 00:15:02.579 | 99.00th=[66847], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:15:02.579 | 99.99th=[69731] 00:15:02.579 bw ( KiB/s): min= 9608, max=19064, per=16.00%, avg=14336.00, stdev=6686.40, samples=2 00:15:02.579 iops : min= 2402, max= 4766, avg=3584.00, stdev=1671.60, samples=2 00:15:02.579 lat (msec) : 2=0.22%, 4=1.91%, 10=36.52%, 20=28.53%, 50=27.63% 00:15:02.579 lat (msec) : 100=5.18% 00:15:02.579 cpu : usr=2.89%, sys=4.39%, ctx=424, majf=0, minf=2 00:15:02.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:15:02.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:02.579 issued rwts: total=3324,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:02.579 job2: (groupid=0, jobs=1): err= 0: pid=1224615: Thu Dec 5 
11:58:27 2024 00:15:02.579 read: IOPS=6614, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1005msec) 00:15:02.579 slat (nsec): min=1019, max=14630k, avg=64603.52, stdev=522158.26 00:15:02.579 clat (usec): min=1260, max=30309, avg=9050.64, stdev=3447.34 00:15:02.579 lat (usec): min=1289, max=30315, avg=9115.25, stdev=3477.69 00:15:02.580 clat percentiles (usec): 00:15:02.580 | 1.00th=[ 2442], 5.00th=[ 3785], 10.00th=[ 6063], 20.00th=[ 7308], 00:15:02.580 | 30.00th=[ 7963], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8848], 00:15:02.580 | 70.00th=[ 9503], 80.00th=[10552], 90.00th=[12518], 95.00th=[15664], 00:15:02.580 | 99.00th=[19530], 99.50th=[26084], 99.90th=[30278], 99.95th=[30278], 00:15:02.580 | 99.99th=[30278] 00:15:02.580 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:15:02.580 slat (nsec): min=1709, max=12731k, avg=65853.92, stdev=443586.21 00:15:02.580 clat (usec): min=498, max=64940, avg=10052.95, stdev=8585.42 00:15:02.580 lat (usec): min=511, max=64950, avg=10118.81, stdev=8634.44 00:15:02.580 clat percentiles (usec): 00:15:02.580 | 1.00th=[ 1876], 5.00th=[ 3818], 10.00th=[ 4686], 20.00th=[ 5473], 00:15:02.580 | 30.00th=[ 6652], 40.00th=[ 7635], 50.00th=[ 7898], 60.00th=[ 8160], 00:15:02.580 | 70.00th=[ 8586], 80.00th=[10814], 90.00th=[17695], 95.00th=[28181], 00:15:02.580 | 99.00th=[53216], 99.50th=[58983], 99.90th=[63177], 99.95th=[64750], 00:15:02.580 | 99.99th=[64750] 00:15:02.580 bw ( KiB/s): min=23104, max=30144, per=29.72%, avg=26624.00, stdev=4978.03, samples=2 00:15:02.580 iops : min= 5776, max= 7536, avg=6656.00, stdev=1244.51, samples=2 00:15:02.580 lat (usec) : 500=0.01%, 750=0.13%, 1000=0.11% 00:15:02.580 lat (msec) : 2=0.48%, 4=5.05%, 10=70.17%, 20=19.27%, 50=4.16% 00:15:02.580 lat (msec) : 100=0.62% 00:15:02.580 cpu : usr=5.58%, sys=7.47%, ctx=627, majf=0, minf=1 00:15:02.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:15:02.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:15:02.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:02.580 issued rwts: total=6648,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:02.580 job3: (groupid=0, jobs=1): err= 0: pid=1224624: Thu Dec 5 11:58:27 2024 00:15:02.580 read: IOPS=5974, BW=23.3MiB/s (24.5MB/s)(23.5MiB/1006msec) 00:15:02.580 slat (nsec): min=939, max=20324k, avg=89056.74, stdev=733097.59 00:15:02.580 clat (usec): min=2017, max=55710, avg=12173.52, stdev=9077.33 00:15:02.580 lat (usec): min=2728, max=55737, avg=12262.58, stdev=9139.81 00:15:02.580 clat percentiles (usec): 00:15:02.580 | 1.00th=[ 4621], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 7635], 00:15:02.580 | 30.00th=[ 7963], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9241], 00:15:02.580 | 70.00th=[10290], 80.00th=[12780], 90.00th=[27657], 95.00th=[35390], 00:15:02.580 | 99.00th=[46400], 99.50th=[47973], 99.90th=[51643], 99.95th=[51643], 00:15:02.580 | 99.99th=[55837] 00:15:02.580 write: IOPS=6107, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1006msec); 0 zone resets 00:15:02.580 slat (nsec): min=1718, max=13414k, avg=69608.30, stdev=521373.41 00:15:02.580 clat (usec): min=1027, max=39714, avg=8782.11, stdev=3661.80 00:15:02.580 lat (usec): min=1059, max=39724, avg=8851.72, stdev=3710.24 00:15:02.580 clat percentiles (usec): 00:15:02.580 | 1.00th=[ 3294], 5.00th=[ 4555], 10.00th=[ 4948], 20.00th=[ 6259], 00:15:02.580 | 30.00th=[ 7373], 40.00th=[ 7832], 50.00th=[ 8094], 60.00th=[ 8455], 00:15:02.580 | 70.00th=[ 9634], 80.00th=[11076], 90.00th=[12125], 95.00th=[14222], 00:15:02.580 | 99.00th=[19530], 99.50th=[26346], 99.90th=[39584], 99.95th=[39584], 00:15:02.580 | 99.99th=[39584] 00:15:02.580 bw ( KiB/s): min=17624, max=31528, per=27.44%, avg=24576.00, stdev=9831.61, samples=2 00:15:02.580 iops : min= 4406, max= 7882, avg=6144.00, stdev=2457.90, samples=2 00:15:02.580 lat (msec) : 2=0.10%, 4=1.44%, 10=66.85%, 20=25.34%, 50=6.17% 
00:15:02.580 lat (msec) : 100=0.10% 00:15:02.580 cpu : usr=5.07%, sys=6.97%, ctx=466, majf=0, minf=1 00:15:02.580 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:15:02.580 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:02.580 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:02.580 issued rwts: total=6010,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:02.580 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:02.580 00:15:02.580 Run status group 0 (all jobs): 00:15:02.580 READ: bw=84.4MiB/s (88.5MB/s), 12.9MiB/s-25.8MiB/s (13.6MB/s-27.1MB/s), io=84.9MiB (89.1MB), run=1004-1006msec 00:15:02.580 WRITE: bw=87.5MiB/s (91.7MB/s), 13.9MiB/s-25.9MiB/s (14.6MB/s-27.1MB/s), io=88.0MiB (92.3MB), run=1004-1006msec 00:15:02.580 00:15:02.580 Disk stats (read/write): 00:15:02.580 nvme0n1: ios=4661/4687, merge=0/0, ticks=47239/54999, in_queue=102238, util=84.57% 00:15:02.580 nvme0n2: ios=3121/3159, merge=0/0, ticks=32921/32857, in_queue=65778, util=86.85% 00:15:02.580 nvme0n3: ios=5177/5568, merge=0/0, ticks=45356/56311, in_queue=101667, util=91.88% 00:15:02.580 nvme0n4: ios=4628/4935, merge=0/0, ticks=41075/32927, in_queue=74002, util=92.10% 00:15:02.580 11:58:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:15:02.580 [global] 00:15:02.580 thread=1 00:15:02.580 invalidate=1 00:15:02.580 rw=randwrite 00:15:02.580 time_based=1 00:15:02.580 runtime=1 00:15:02.580 ioengine=libaio 00:15:02.580 direct=1 00:15:02.580 bs=4096 00:15:02.580 iodepth=128 00:15:02.580 norandommap=0 00:15:02.580 numjobs=1 00:15:02.580 00:15:02.580 verify_dump=1 00:15:02.580 verify_backlog=512 00:15:02.580 verify_state_save=0 00:15:02.580 do_verify=1 00:15:02.580 verify=crc32c-intel 00:15:02.580 [job0] 00:15:02.580 filename=/dev/nvme0n1 00:15:02.580 [job1] 00:15:02.580 
filename=/dev/nvme0n2 00:15:02.580 [job2] 00:15:02.580 filename=/dev/nvme0n3 00:15:02.580 [job3] 00:15:02.580 filename=/dev/nvme0n4 00:15:02.580 Could not set queue depth (nvme0n1) 00:15:02.580 Could not set queue depth (nvme0n2) 00:15:02.580 Could not set queue depth (nvme0n3) 00:15:02.580 Could not set queue depth (nvme0n4) 00:15:02.840 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:02.840 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:02.840 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:02.840 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:02.840 fio-3.35 00:15:02.840 Starting 4 threads 00:15:04.224 00:15:04.225 job0: (groupid=0, jobs=1): err= 0: pid=1225107: Thu Dec 5 11:58:29 2024 00:15:04.225 read: IOPS=7139, BW=27.9MiB/s (29.2MB/s)(28.0MiB/1004msec) 00:15:04.225 slat (nsec): min=898, max=6403.4k, avg=70021.89, stdev=388995.26 00:15:04.225 clat (usec): min=5531, max=30310, avg=9002.94, stdev=2870.13 00:15:04.225 lat (usec): min=5536, max=31015, avg=9072.96, stdev=2904.46 00:15:04.225 clat percentiles (usec): 00:15:04.225 | 1.00th=[ 6325], 5.00th=[ 6980], 10.00th=[ 7504], 20.00th=[ 7832], 00:15:04.225 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 8356], 60.00th=[ 8455], 00:15:04.225 | 70.00th=[ 8717], 80.00th=[ 8979], 90.00th=[ 9765], 95.00th=[15139], 00:15:04.225 | 99.00th=[22152], 99.50th=[25822], 99.90th=[29230], 99.95th=[30278], 00:15:04.225 | 99.99th=[30278] 00:15:04.225 write: IOPS=7316, BW=28.6MiB/s (30.0MB/s)(28.7MiB/1004msec); 0 zone resets 00:15:04.225 slat (nsec): min=1491, max=5081.3k, avg=63506.35, stdev=315846.91 00:15:04.225 clat (usec): min=1425, max=29864, avg=8547.23, stdev=3429.27 00:15:04.225 lat (usec): min=1433, max=29872, avg=8610.74, stdev=3456.95 00:15:04.225 clat 
percentiles (usec): 00:15:04.225 | 1.00th=[ 5800], 5.00th=[ 6587], 10.00th=[ 6783], 20.00th=[ 7046], 00:15:04.225 | 30.00th=[ 7177], 40.00th=[ 7439], 50.00th=[ 7767], 60.00th=[ 8029], 00:15:04.225 | 70.00th=[ 8160], 80.00th=[ 8455], 90.00th=[10028], 95.00th=[15008], 00:15:04.225 | 99.00th=[26084], 99.50th=[28181], 99.90th=[29492], 99.95th=[29754], 00:15:04.225 | 99.99th=[29754] 00:15:04.225 bw ( KiB/s): min=28672, max=29072, per=28.20%, avg=28872.00, stdev=282.84, samples=2 00:15:04.225 iops : min= 7168, max= 7268, avg=7218.00, stdev=70.71, samples=2 00:15:04.225 lat (msec) : 2=0.06%, 4=0.01%, 10=90.42%, 20=7.40%, 50=2.11% 00:15:04.225 cpu : usr=4.29%, sys=6.88%, ctx=861, majf=0, minf=2 00:15:04.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:04.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:04.225 issued rwts: total=7168,7346,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:04.225 job1: (groupid=0, jobs=1): err= 0: pid=1225121: Thu Dec 5 11:58:29 2024 00:15:04.225 read: IOPS=8975, BW=35.1MiB/s (36.8MB/s)(35.2MiB/1003msec) 00:15:04.225 slat (nsec): min=984, max=6473.1k, avg=58995.13, stdev=421496.85 00:15:04.225 clat (usec): min=2144, max=13628, avg=7528.36, stdev=1682.87 00:15:04.225 lat (usec): min=2192, max=13636, avg=7587.36, stdev=1707.40 00:15:04.225 clat percentiles (usec): 00:15:04.225 | 1.00th=[ 3490], 5.00th=[ 5473], 10.00th=[ 5866], 20.00th=[ 6456], 00:15:04.225 | 30.00th=[ 6718], 40.00th=[ 6915], 50.00th=[ 7111], 60.00th=[ 7373], 00:15:04.225 | 70.00th=[ 7767], 80.00th=[ 8717], 90.00th=[ 9896], 95.00th=[11076], 00:15:04.225 | 99.00th=[12387], 99.50th=[12649], 99.90th=[13042], 99.95th=[13042], 00:15:04.225 | 99.99th=[13566] 00:15:04.225 write: IOPS=9188, BW=35.9MiB/s (37.6MB/s)(36.0MiB/1003msec); 0 zone resets 00:15:04.225 slat (nsec): 
min=1641, max=10141k, avg=45669.22, stdev=264993.78 00:15:04.225 clat (usec): min=1484, max=16812, avg=6421.61, stdev=1382.59 00:15:04.225 lat (usec): min=1511, max=16825, avg=6467.28, stdev=1405.29 00:15:04.225 clat percentiles (usec): 00:15:04.225 | 1.00th=[ 2409], 5.00th=[ 3556], 10.00th=[ 4424], 20.00th=[ 5735], 00:15:04.225 | 30.00th=[ 6456], 40.00th=[ 6587], 50.00th=[ 6718], 60.00th=[ 6849], 00:15:04.225 | 70.00th=[ 6980], 80.00th=[ 7111], 90.00th=[ 7242], 95.00th=[ 7439], 00:15:04.225 | 99.00th=[11469], 99.50th=[11469], 99.90th=[12780], 99.95th=[13042], 00:15:04.225 | 99.99th=[16909] 00:15:04.225 bw ( KiB/s): min=36864, max=36864, per=36.01%, avg=36864.00, stdev= 0.00, samples=2 00:15:04.225 iops : min= 9216, max= 9216, avg=9216.00, stdev= 0.00, samples=2 00:15:04.225 lat (msec) : 2=0.20%, 4=4.11%, 10=89.97%, 20=5.72% 00:15:04.225 cpu : usr=6.49%, sys=8.78%, ctx=992, majf=0, minf=1 00:15:04.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:15:04.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:04.225 issued rwts: total=9002,9216,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:04.225 job2: (groupid=0, jobs=1): err= 0: pid=1225138: Thu Dec 5 11:58:29 2024 00:15:04.225 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:15:04.225 slat (nsec): min=934, max=8683.4k, avg=109387.48, stdev=656563.71 00:15:04.225 clat (usec): min=3669, max=25074, avg=13554.06, stdev=2775.02 00:15:04.225 lat (usec): min=3674, max=25098, avg=13663.45, stdev=2842.89 00:15:04.225 clat percentiles (usec): 00:15:04.225 | 1.00th=[ 7111], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[11863], 00:15:04.225 | 30.00th=[13042], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:15:04.225 | 70.00th=[14484], 80.00th=[15401], 90.00th=[16909], 95.00th=[17695], 00:15:04.225 | 
99.00th=[18744], 99.50th=[20579], 99.90th=[22676], 99.95th=[22938], 00:15:04.225 | 99.99th=[25035] 00:15:04.225 write: IOPS=4963, BW=19.4MiB/s (20.3MB/s)(19.4MiB/1003msec); 0 zone resets 00:15:04.225 slat (nsec): min=1592, max=4392.7k, avg=94362.19, stdev=379216.87 00:15:04.225 clat (usec): min=1126, max=21769, avg=13007.23, stdev=4229.73 00:15:04.225 lat (usec): min=1137, max=21775, avg=13101.59, stdev=4259.77 00:15:04.225 clat percentiles (usec): 00:15:04.225 | 1.00th=[ 2409], 5.00th=[ 5538], 10.00th=[ 7963], 20.00th=[ 8455], 00:15:04.225 | 30.00th=[ 8979], 40.00th=[13304], 50.00th=[13960], 60.00th=[14746], 00:15:04.225 | 70.00th=[15795], 80.00th=[16909], 90.00th=[17695], 95.00th=[19006], 00:15:04.225 | 99.00th=[20317], 99.50th=[20841], 99.90th=[21365], 99.95th=[21365], 00:15:04.225 | 99.99th=[21890] 00:15:04.225 bw ( KiB/s): min=18328, max=20480, per=18.95%, avg=19404.00, stdev=1521.69, samples=2 00:15:04.225 iops : min= 4582, max= 5120, avg=4851.00, stdev=380.42, samples=2 00:15:04.225 lat (msec) : 2=0.22%, 4=1.18%, 10=22.82%, 20=74.89%, 50=0.89% 00:15:04.225 cpu : usr=3.79%, sys=4.99%, ctx=665, majf=0, minf=2 00:15:04.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:15:04.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:04.225 issued rwts: total=4608,4978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:04.225 job3: (groupid=0, jobs=1): err= 0: pid=1225145: Thu Dec 5 11:58:29 2024 00:15:04.225 read: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec) 00:15:04.225 slat (nsec): min=927, max=7905.9k, avg=127974.71, stdev=725227.71 00:15:04.225 clat (usec): min=6449, max=52150, avg=16725.73, stdev=3506.40 00:15:04.225 lat (usec): min=6453, max=52153, avg=16853.71, stdev=3470.40 00:15:04.225 clat percentiles (usec): 00:15:04.225 | 1.00th=[ 7242], 
5.00th=[10683], 10.00th=[12256], 20.00th=[12911], 00:15:04.225 | 30.00th=[15139], 40.00th=[16909], 50.00th=[17957], 60.00th=[18482], 00:15:04.225 | 70.00th=[19006], 80.00th=[19268], 90.00th=[19792], 95.00th=[20579], 00:15:04.225 | 99.00th=[21365], 99.50th=[21365], 99.90th=[46924], 99.95th=[46924], 00:15:04.225 | 99.99th=[52167] 00:15:04.225 write: IOPS=4144, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1003msec); 0 zone resets 00:15:04.225 slat (nsec): min=1512, max=5339.4k, avg=108054.49, stdev=559999.78 00:15:04.225 clat (usec): min=1375, max=24130, avg=14047.78, stdev=3339.53 00:15:04.225 lat (usec): min=4133, max=24140, avg=14155.84, stdev=3322.14 00:15:04.225 clat percentiles (usec): 00:15:04.225 | 1.00th=[ 4621], 5.00th=[ 7832], 10.00th=[ 9765], 20.00th=[11076], 00:15:04.225 | 30.00th=[13173], 40.00th=[13435], 50.00th=[14091], 60.00th=[15008], 00:15:04.225 | 70.00th=[15926], 80.00th=[16581], 90.00th=[17957], 95.00th=[19006], 00:15:04.225 | 99.00th=[21103], 99.50th=[23725], 99.90th=[23987], 99.95th=[23987], 00:15:04.225 | 99.99th=[24249] 00:15:04.225 bw ( KiB/s): min=15488, max=17280, per=16.00%, avg=16384.00, stdev=1267.14, samples=2 00:15:04.225 iops : min= 3872, max= 4320, avg=4096.00, stdev=316.78, samples=2 00:15:04.225 lat (msec) : 2=0.01%, 10=7.74%, 20=88.21%, 50=4.02%, 100=0.01% 00:15:04.225 cpu : usr=2.99%, sys=5.39%, ctx=296, majf=0, minf=1 00:15:04.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:15:04.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:04.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:04.225 issued rwts: total=4096,4157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:04.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:04.225 00:15:04.225 Run status group 0 (all jobs): 00:15:04.225 READ: bw=96.8MiB/s (101MB/s), 16.0MiB/s-35.1MiB/s (16.7MB/s-36.8MB/s), io=97.2MiB (102MB), run=1003-1004msec 00:15:04.225 WRITE: bw=100.0MiB/s (105MB/s), 
16.2MiB/s-35.9MiB/s (17.0MB/s-37.6MB/s), io=100MiB (105MB), run=1003-1004msec 00:15:04.225 00:15:04.225 Disk stats (read/write): 00:15:04.225 nvme0n1: ios=5998/6144, merge=0/0, ticks=17414/16332, in_queue=33746, util=85.77% 00:15:04.225 nvme0n2: ios=7468/7680, merge=0/0, ticks=53856/47615, in_queue=101471, util=96.84% 00:15:04.225 nvme0n3: ios=3961/4096, merge=0/0, ticks=22611/21675, in_queue=44286, util=88.29% 00:15:04.225 nvme0n4: ios=3296/3584, merge=0/0, ticks=15946/14546, in_queue=30492, util=89.42% 00:15:04.225 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:15:04.225 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1225253 00:15:04.225 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:15:04.225 11:58:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:15:04.225 [global] 00:15:04.225 thread=1 00:15:04.225 invalidate=1 00:15:04.225 rw=read 00:15:04.225 time_based=1 00:15:04.225 runtime=10 00:15:04.225 ioengine=libaio 00:15:04.225 direct=1 00:15:04.225 bs=4096 00:15:04.225 iodepth=1 00:15:04.225 norandommap=1 00:15:04.225 numjobs=1 00:15:04.225 00:15:04.225 [job0] 00:15:04.225 filename=/dev/nvme0n1 00:15:04.225 [job1] 00:15:04.225 filename=/dev/nvme0n2 00:15:04.225 [job2] 00:15:04.225 filename=/dev/nvme0n3 00:15:04.225 [job3] 00:15:04.225 filename=/dev/nvme0n4 00:15:04.225 Could not set queue depth (nvme0n1) 00:15:04.225 Could not set queue depth (nvme0n2) 00:15:04.225 Could not set queue depth (nvme0n3) 00:15:04.225 Could not set queue depth (nvme0n4) 00:15:04.486 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.486 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.486 job2: (g=0): rw=read, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.486 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:04.486 fio-3.35 00:15:04.486 Starting 4 threads 00:15:07.029 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:15:07.289 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=262144, buflen=4096 00:15:07.289 fio: pid=1225629, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:07.289 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:15:07.550 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=937984, buflen=4096 00:15:07.550 fio: pid=1225623, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:07.550 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:07.550 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:15:07.550 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=294912, buflen=4096 00:15:07.550 fio: pid=1225592, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:07.550 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:07.811 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:15:07.811 fio: io_u error on file /dev/nvme0n2: Operation not supported: read 
offset=307200, buflen=4096 00:15:07.811 fio: pid=1225614, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:15:07.811 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:07.811 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:15:07.811 00:15:07.811 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1225592: Thu Dec 5 11:58:32 2024 00:15:07.811 read: IOPS=25, BW=98.7KiB/s (101kB/s)(288KiB/2917msec) 00:15:07.811 slat (usec): min=19, max=110, avg=26.56, stdev=10.11 00:15:07.811 clat (usec): min=1042, max=41963, avg=40465.86, stdev=4717.09 00:15:07.811 lat (usec): min=1082, max=41988, avg=40492.43, stdev=4715.53 00:15:07.811 clat percentiles (usec): 00:15:07.811 | 1.00th=[ 1045], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:15:07.811 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:07.811 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:15:07.811 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:07.811 | 99.99th=[42206] 00:15:07.811 bw ( KiB/s): min= 96, max= 104, per=17.12%, avg=97.60, stdev= 3.58, samples=5 00:15:07.811 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:15:07.811 lat (msec) : 2=1.37%, 50=97.26% 00:15:07.811 cpu : usr=0.10%, sys=0.00%, ctx=74, majf=0, minf=1 00:15:07.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.811 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.811 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.811 latency : target=0, window=0, percentile=100.00%, depth=1 
00:15:07.811 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1225614: Thu Dec 5 11:58:32 2024 00:15:07.811 read: IOPS=24, BW=96.6KiB/s (98.9kB/s)(300KiB/3106msec) 00:15:07.811 slat (nsec): min=25731, max=61013, avg=27108.25, stdev=5340.29 00:15:07.811 clat (usec): min=1060, max=42064, avg=41364.85, stdev=4722.14 00:15:07.811 lat (usec): min=1095, max=42090, avg=41391.95, stdev=4721.19 00:15:07.811 clat percentiles (usec): 00:15:07.811 | 1.00th=[ 1057], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:15:07.811 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:15:07.811 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:07.811 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:07.811 | 99.99th=[42206] 00:15:07.811 bw ( KiB/s): min= 96, max= 96, per=16.94%, avg=96.00, stdev= 0.00, samples=6 00:15:07.811 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=6 00:15:07.811 lat (msec) : 2=1.32%, 50=97.37% 00:15:07.811 cpu : usr=0.13%, sys=0.00%, ctx=78, majf=0, minf=2 00:15:07.811 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.811 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.811 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.811 issued rwts: total=76,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.811 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.811 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1225623: Thu Dec 5 11:58:32 2024 00:15:07.811 read: IOPS=83, BW=331KiB/s (339kB/s)(916KiB/2764msec) 00:15:07.811 slat (usec): min=7, max=8684, avg=61.58, stdev=571.08 00:15:07.811 clat (usec): min=313, max=41556, avg=11997.20, stdev=18116.73 00:15:07.811 lat (usec): min=339, max=49957, avg=12058.94, stdev=18188.56 00:15:07.811 clat percentiles (usec): 
00:15:07.811 | 1.00th=[ 330], 5.00th=[ 578], 10.00th=[ 668], 20.00th=[ 717], 00:15:07.811 | 30.00th=[ 758], 40.00th=[ 775], 50.00th=[ 783], 60.00th=[ 799], 00:15:07.811 | 70.00th=[ 857], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:07.811 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:15:07.811 | 99.99th=[41681] 00:15:07.811 bw ( KiB/s): min= 96, max= 1256, per=62.65%, avg=355.20, stdev=504.62, samples=5 00:15:07.811 iops : min= 24, max= 314, avg=88.80, stdev=126.16, samples=5 00:15:07.811 lat (usec) : 500=2.17%, 750=24.35%, 1000=45.22% 00:15:07.811 lat (msec) : 50=27.83% 00:15:07.811 cpu : usr=0.04%, sys=0.25%, ctx=231, majf=0, minf=2 00:15:07.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.812 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.812 issued rwts: total=230,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.812 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1225629: Thu Dec 5 11:58:32 2024 00:15:07.812 read: IOPS=25, BW=99.0KiB/s (101kB/s)(256KiB/2585msec) 00:15:07.812 slat (nsec): min=9556, max=40801, avg=25789.46, stdev=2838.04 00:15:07.812 clat (usec): min=780, max=41040, avg=40339.06, stdev=5023.45 00:15:07.812 lat (usec): min=821, max=41065, avg=40364.84, stdev=5021.54 00:15:07.812 clat percentiles (usec): 00:15:07.812 | 1.00th=[ 783], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:15:07.812 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:15:07.812 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:15:07.812 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:15:07.812 | 99.99th=[41157] 00:15:07.812 bw ( KiB/s): min= 96, max= 104, per=17.47%, avg=99.20, stdev= 
4.38, samples=5 00:15:07.812 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:15:07.812 lat (usec) : 1000=1.54% 00:15:07.812 lat (msec) : 50=96.92% 00:15:07.812 cpu : usr=0.12%, sys=0.00%, ctx=65, majf=0, minf=2 00:15:07.812 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.812 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.812 complete : 0=1.5%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.812 issued rwts: total=65,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.812 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:07.812 00:15:07.812 Run status group 0 (all jobs): 00:15:07.812 READ: bw=567KiB/s (580kB/s), 96.6KiB/s-331KiB/s (98.9kB/s-339kB/s), io=1760KiB (1802kB), run=2585-3106msec 00:15:07.812 00:15:07.812 Disk stats (read/write): 00:15:07.812 nvme0n1: ios=69/0, merge=0/0, ticks=2793/0, in_queue=2793, util=94.66% 00:15:07.812 nvme0n2: ios=74/0, merge=0/0, ticks=3063/0, in_queue=3063, util=95.63% 00:15:07.812 nvme0n3: ios=224/0, merge=0/0, ticks=2543/0, in_queue=2543, util=96.03% 00:15:07.812 nvme0n4: ios=63/0, merge=0/0, ticks=2543/0, in_queue=2543, util=96.42% 00:15:08.072 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:08.072 11:58:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:15:08.334 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:08.334 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:15:08.334 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:15:08.334 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:15:08.594 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:15:08.594 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:15:08.855 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:15:08.855 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1225253 00:15:08.855 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:15:08.855 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:08.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.855 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:08.855 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:15:08.855 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:08.855 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.855 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:08.855 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:08.855 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:15:08.855 11:58:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:15:08.855 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:15:08.855 nvmf hotplug test: fio failed as expected 00:15:08.855 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:09.115 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:15:09.115 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:15:09.115 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:15:09.115 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:15:09.115 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:15:09.115 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:09.115 11:58:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:09.115 rmmod nvme_tcp 00:15:09.115 rmmod nvme_fabrics 00:15:09.115 rmmod nvme_keyring 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # 
set -e 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 1221738 ']' 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 1221738 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1221738 ']' 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1221738 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1221738 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1221738' 00:15:09.115 killing process with pid 1221738 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1221738 00:15:09.115 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1221738 00:15:09.376 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:09.376 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:15:09.376 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@254 -- # local dev 00:15:09.376 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@257 
-- # remove_target_ns 00:15:09.376 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:09.376 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:09.376 11:58:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # return 0 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- 
# (( 4 == 3 )) 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@274 -- # iptr 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-save 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-restore 00:15:11.288 00:15:11.288 real 0m29.647s 00:15:11.288 user 2m35.856s 00:15:11.288 sys 0m9.368s 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:11.288 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.288 ************************************ 00:15:11.288 END TEST nvmf_fio_target 00:15:11.288 ************************************ 00:15:11.550 11:58:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:11.550 11:58:36 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:11.550 11:58:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:11.550 11:58:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:11.550 ************************************ 00:15:11.550 START TEST nvmf_bdevio 00:15:11.550 ************************************ 00:15:11.550 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:15:11.550 * Looking for test storage... 00:15:11.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:11.550 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:11.550 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:15:11.550 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@338 -- # local 'op=<' 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:11.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.811 --rc genhtml_branch_coverage=1 00:15:11.811 --rc genhtml_function_coverage=1 00:15:11.811 --rc genhtml_legend=1 00:15:11.811 --rc geninfo_all_blocks=1 00:15:11.811 --rc geninfo_unexecuted_blocks=1 00:15:11.811 00:15:11.811 ' 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:11.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.811 --rc genhtml_branch_coverage=1 00:15:11.811 --rc genhtml_function_coverage=1 00:15:11.811 --rc genhtml_legend=1 00:15:11.811 --rc geninfo_all_blocks=1 00:15:11.811 --rc geninfo_unexecuted_blocks=1 00:15:11.811 00:15:11.811 ' 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:11.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.811 --rc genhtml_branch_coverage=1 00:15:11.811 --rc genhtml_function_coverage=1 00:15:11.811 --rc genhtml_legend=1 00:15:11.811 --rc geninfo_all_blocks=1 00:15:11.811 --rc geninfo_unexecuted_blocks=1 00:15:11.811 00:15:11.811 ' 00:15:11.811 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:11.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:11.811 --rc genhtml_branch_coverage=1 00:15:11.811 --rc genhtml_function_coverage=1 00:15:11.811 --rc genhtml_legend=1 00:15:11.812 --rc geninfo_all_blocks=1 00:15:11.812 --rc geninfo_unexecuted_blocks=1 00:15:11.812 00:15:11.812 ' 
00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:15:11.812 
11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:11.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:15:11.812 11:58:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # 
x722=() 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:20.061 
11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:20.061 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:20.061 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:20.061 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ 
tcp == tcp ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:20.061 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@247 -- # create_target_ns 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:15:20.061 11:58:43 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:20.061 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy 
ip=167772161 transport=tcp ips 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:15:20.062 
11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:20.062 10.0.0.1 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
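The `val_to_ip` calls traced above turn the 32-bit pool values 167772161/167772162 (0x0A000001/0x0A000002) into the dotted-quad addresses 10.0.0.1 and 10.0.0.2. A standalone sketch of the equivalent conversion; the bit-shift decomposition here is an assumption reconstructed from the trace (the log only shows the final `printf` with the octets already split), not code copied from setup.sh:

```shell
# Convert a 32-bit integer from the ip_pool counter into dotted-quad form,
# one octet per 8-bit field, most significant octet first.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator side)
val_to_ip 167772162   # 10.0.0.2 (target side, inside the netns)
```

Keeping the pool as a plain integer lets `setup_interfaces` hand out consecutive initiator/target address pairs with simple arithmetic (`ip_pool += 2` per interface pair).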
nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:20.062 10.0.0.2 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:20.062 11:58:43 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@98 -- # local dev=initiator0 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:20.062 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:20.063 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:20.063 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.653 ms 00:15:20.063 00:15:20.063 --- 10.0.0.1 ping statistics --- 00:15:20.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.063 rtt min/avg/max/mdev = 0.653/0.653/0.653/0.000 ms 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.2 ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:15:20.063 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:20.063 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:15:20.063 00:15:20.063 --- 10.0.0.2 ping statistics --- 00:15:20.063 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:20.063 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:15:20.063 11:58:44 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:15:20.063 11:58:44 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev= 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # 
[[ -n target1 ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev= 00:15:20.063 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:15:20.064 ' 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=1230825 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@329 -- # waitforlisten 1230825 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1230825 ']' 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:20.064 11:58:44 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:20.064 [2024-12-05 11:58:44.338466] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:15:20.064 [2024-12-05 11:58:44.338531] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.064 [2024-12-05 11:58:44.438552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.064 [2024-12-05 11:58:44.491077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.064 [2024-12-05 11:58:44.491135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:20.064 [2024-12-05 11:58:44.491143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.064 [2024-12-05 11:58:44.491150] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.064 [2024-12-05 11:58:44.491157] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.064 [2024-12-05 11:58:44.493211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:15:20.064 [2024-12-05 11:58:44.493375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:15:20.064 [2024-12-05 11:58:44.493533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:15:20.064 [2024-12-05 11:58:44.493534] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:20.325 [2024-12-05 11:58:45.210705] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init 
*** 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:20.325 Malloc0 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:20.325 [2024-12-05 
11:58:45.288127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:15:20.325 { 00:15:20.325 "params": { 00:15:20.325 "name": "Nvme$subsystem", 00:15:20.325 "trtype": "$TEST_TRANSPORT", 00:15:20.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:20.325 "adrfam": "ipv4", 00:15:20.325 "trsvcid": "$NVMF_PORT", 00:15:20.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:20.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:20.325 "hdgst": ${hdgst:-false}, 00:15:20.325 "ddgst": ${ddgst:-false} 00:15:20.325 }, 00:15:20.325 "method": "bdev_nvme_attach_controller" 00:15:20.325 } 00:15:20.325 EOF 00:15:20.325 )") 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
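The `gen_nvmf_target_json` helper traced here builds a per-subsystem `bdev_nvme_attach_controller` config block from a heredoc template and pipes it through `jq` before feeding it to `bdevio` via `/dev/fd/62`. A simplified, self-contained sketch of that templating step, with the discovered target IP and port hard-coded as hypothetical stand-ins for `$NVMF_FIRST_TARGET_IP` and `$NVMF_PORT`:

```shell
# Sketch of the config templating done by gen_nvmf_target_json.
# The IP/port values are stand-ins for what setup.sh discovered above.
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

# Expand the heredoc template into a single JSON fragment, exactly the
# shape printed in the trace (hdgst/ddgst default to false).
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

In the real harness this fragment is then normalized with `jq .` and handed to `bdevio --json` as an anonymous file descriptor, so the initiator config never touches disk.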
00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:15:20.325 11:58:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:15:20.325 "params": { 00:15:20.325 "name": "Nvme1", 00:15:20.325 "trtype": "tcp", 00:15:20.325 "traddr": "10.0.0.2", 00:15:20.325 "adrfam": "ipv4", 00:15:20.325 "trsvcid": "4420", 00:15:20.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:20.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:20.325 "hdgst": false, 00:15:20.325 "ddgst": false 00:15:20.325 }, 00:15:20.325 "method": "bdev_nvme_attach_controller" 00:15:20.325 }' 00:15:20.325 [2024-12-05 11:58:45.351108] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:15:20.325 [2024-12-05 11:58:45.351198] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231036 ] 00:15:20.607 [2024-12-05 11:58:45.446909] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:20.607 [2024-12-05 11:58:45.504119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.607 [2024-12-05 11:58:45.504280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.607 [2024-12-05 11:58:45.504280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.867 I/O targets: 00:15:20.867 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:15:20.867 00:15:20.867 00:15:20.867 CUnit - A unit testing framework for C - Version 2.1-3 00:15:20.867 http://cunit.sourceforge.net/ 00:15:20.867 00:15:20.867 00:15:20.867 Suite: bdevio tests on: Nvme1n1 00:15:20.867 Test: blockdev write read block ...passed 00:15:20.867 Test: blockdev write zeroes read block ...passed 00:15:20.867 Test: blockdev write zeroes read no split ...passed 00:15:20.867 Test: blockdev write zeroes read split 
...passed 00:15:20.867 Test: blockdev write zeroes read split partial ...passed 00:15:20.867 Test: blockdev reset ...[2024-12-05 11:58:45.803799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:15:20.867 [2024-12-05 11:58:45.803898] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1944970 (9): Bad file descriptor 00:15:20.867 [2024-12-05 11:58:45.818119] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:15:20.867 passed 00:15:20.867 Test: blockdev write read 8 blocks ...passed 00:15:20.867 Test: blockdev write read size > 128k ...passed 00:15:20.867 Test: blockdev write read invalid size ...passed 00:15:20.867 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:20.867 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:20.867 Test: blockdev write read max offset ...passed 00:15:21.127 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:21.127 Test: blockdev writev readv 8 blocks ...passed 00:15:21.127 Test: blockdev writev readv 30 x 1block ...passed 00:15:21.127 Test: blockdev writev readv block ...passed 00:15:21.127 Test: blockdev writev readv size > 128k ...passed 00:15:21.127 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:21.127 Test: blockdev comparev and writev ...[2024-12-05 11:58:46.085896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.127 [2024-12-05 11:58:46.085949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:15:21.127 [2024-12-05 11:58:46.085966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.127 [2024-12-05 
11:58:46.085975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:15:21.127 [2024-12-05 11:58:46.086518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.127 [2024-12-05 11:58:46.086542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:15:21.127 [2024-12-05 11:58:46.086557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.127 [2024-12-05 11:58:46.086565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:15:21.127 [2024-12-05 11:58:46.087149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.127 [2024-12-05 11:58:46.087165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:15:21.127 [2024-12-05 11:58:46.087180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.127 [2024-12-05 11:58:46.087189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:15:21.127 [2024-12-05 11:58:46.087782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.127 [2024-12-05 11:58:46.087797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:15:21.127 [2024-12-05 11:58:46.087811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:15:21.128 [2024-12-05 11:58:46.087819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:15:21.128 passed 00:15:21.128 Test: blockdev nvme passthru rw ...passed 00:15:21.128 Test: blockdev nvme passthru vendor specific ...[2024-12-05 11:58:46.171399] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:21.128 [2024-12-05 11:58:46.171417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:15:21.128 [2024-12-05 11:58:46.171801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:21.128 [2024-12-05 11:58:46.171816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:15:21.128 [2024-12-05 11:58:46.172181] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:21.128 [2024-12-05 11:58:46.172195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:15:21.128 [2024-12-05 11:58:46.172573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:15:21.128 [2024-12-05 11:58:46.172587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:15:21.128 passed 00:15:21.389 Test: blockdev nvme admin passthru ...passed 00:15:21.389 Test: blockdev copy ...passed 00:15:21.389 00:15:21.389 Run Summary: Type Total Ran Passed Failed Inactive 00:15:21.389 suites 1 1 n/a 0 0 00:15:21.389 tests 23 23 23 0 0 00:15:21.389 asserts 152 152 152 0 n/a 00:15:21.389 00:15:21.389 Elapsed time = 1.119 seconds 
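The bdevio run above is bracketed by RPC calls: `bdevio.sh` lines 18–22 create the transport, malloc bdev, subsystem, namespace, and listener, and line 26 deletes the subsystem on the way out. A dry-run sketch of that flow, with `rpc_cmd` replaced by a plain `echo` wrapper (in the harness it dispatches to `scripts/rpc.py` over `/var/tmp/spdk.sock`):

```shell
# Dry-run sketch of the bdevio target setup/teardown RPC sequence.
# rpc_cmd here only echoes; the real helper invokes scripts/rpc.py.
rpc_cmd() { echo "rpc.py $*"; }

setup_target() {
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}

teardown_target() {
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
}

setup_target
teardown_target
```

The arguments mirror the trace verbatim (64 MiB malloc bdev with 512-byte blocks, listener on the namespaced 10.0.0.2:4420); only the echo wrapper is invented for illustration.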
00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:21.389 rmmod nvme_tcp 00:15:21.389 rmmod nvme_fabrics 00:15:21.389 rmmod nvme_keyring 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 1230825 ']' 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 1230825 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 1230825 ']' 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1230825 00:15:21.389 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:15:21.650 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:21.651 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1230825 00:15:21.651 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:15:21.651 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:15:21.651 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1230825' 00:15:21.651 killing process with pid 1230825 00:15:21.651 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1230825 00:15:21.651 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1230825 00:15:21.651 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:21.651 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:15:21.651 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@254 -- # local dev 00:15:21.651 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@257 -- # remove_target_ns 00:15:21.651 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:21.651 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:21.651 11:58:46 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@258 
-- # delete_main_bridge 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # return 0 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush 
dev cvl_0_1 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@274 -- # iptr 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-save 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-restore 00:15:24.196 00:15:24.196 real 0m12.353s 00:15:24.196 user 0m13.045s 00:15:24.196 sys 0m6.287s 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:15:24.196 ************************************ 00:15:24.196 END TEST nvmf_bdevio 00:15:24.196 ************************************ 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:15:24.196 00:15:24.196 real 5m6.693s 00:15:24.196 user 11m49.364s 00:15:24.196 sys 1m53.736s 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:15:24.196 ************************************ 00:15:24.196 END TEST nvmf_target_core 00:15:24.196 ************************************ 00:15:24.196 11:58:48 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:24.196 11:58:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:24.196 11:58:48 nvmf_tcp -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.196 11:58:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:24.196 ************************************ 00:15:24.196 START TEST nvmf_target_extra 00:15:24.196 ************************************ 00:15:24.196 11:58:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:15:24.196 * Looking for test storage... 00:15:24.196 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:24.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.196 --rc genhtml_branch_coverage=1 
00:15:24.196 --rc genhtml_function_coverage=1 00:15:24.196 --rc genhtml_legend=1 00:15:24.196 --rc geninfo_all_blocks=1 00:15:24.196 --rc geninfo_unexecuted_blocks=1 00:15:24.196 00:15:24.196 ' 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:24.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.196 --rc genhtml_branch_coverage=1 00:15:24.196 --rc genhtml_function_coverage=1 00:15:24.196 --rc genhtml_legend=1 00:15:24.196 --rc geninfo_all_blocks=1 00:15:24.196 --rc geninfo_unexecuted_blocks=1 00:15:24.196 00:15:24.196 ' 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:24.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.196 --rc genhtml_branch_coverage=1 00:15:24.196 --rc genhtml_function_coverage=1 00:15:24.196 --rc genhtml_legend=1 00:15:24.196 --rc geninfo_all_blocks=1 00:15:24.196 --rc geninfo_unexecuted_blocks=1 00:15:24.196 00:15:24.196 ' 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:24.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.196 --rc genhtml_branch_coverage=1 00:15:24.196 --rc genhtml_function_coverage=1 00:15:24.196 --rc genhtml_legend=1 00:15:24.196 --rc geninfo_all_blocks=1 00:15:24.196 --rc geninfo_unexecuted_blocks=1 00:15:24.196 00:15:24.196 ' 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.196 11:58:49 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.196 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@50 -- # : 0 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:24.197 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@54 -- # have_pci_nics=0 
00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:24.197 ************************************ 00:15:24.197 START TEST nvmf_example 00:15:24.197 ************************************ 00:15:24.197 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:15:24.458 * Looking for test storage... 
00:15:24.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:24.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.458 --rc genhtml_branch_coverage=1 00:15:24.458 --rc 
genhtml_function_coverage=1 00:15:24.458 --rc genhtml_legend=1 00:15:24.458 --rc geninfo_all_blocks=1 00:15:24.458 --rc geninfo_unexecuted_blocks=1 00:15:24.458 00:15:24.458 ' 00:15:24.458 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:24.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.458 --rc genhtml_branch_coverage=1 00:15:24.458 --rc genhtml_function_coverage=1 00:15:24.458 --rc genhtml_legend=1 00:15:24.458 --rc geninfo_all_blocks=1 00:15:24.458 --rc geninfo_unexecuted_blocks=1 00:15:24.458 00:15:24.458 ' 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:24.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.459 --rc genhtml_branch_coverage=1 00:15:24.459 --rc genhtml_function_coverage=1 00:15:24.459 --rc genhtml_legend=1 00:15:24.459 --rc geninfo_all_blocks=1 00:15:24.459 --rc geninfo_unexecuted_blocks=1 00:15:24.459 00:15:24.459 ' 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:24.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:24.459 --rc genhtml_branch_coverage=1 00:15:24.459 --rc genhtml_function_coverage=1 00:15:24.459 --rc genhtml_legend=1 00:15:24.459 --rc geninfo_all_blocks=1 00:15:24.459 --rc geninfo_unexecuted_blocks=1 00:15:24.459 00:15:24.459 ' 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.459 11:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@50 -- # : 0 
00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:24.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:15:24.459 
11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # remove_target_ns 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # xtrace_disable 00:15:24.459 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:32.602 11:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # pci_devs=() 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # net_devs=() 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # e810=() 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # local -ga e810 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # x722=() 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # local -ga x722 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # mlx=() 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # local -ga mlx 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:32.602 11:58:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:32.602 Found 0000:4b:00.0 (0x8086 - 0x159b) 
00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:32.602 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:32.602 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.602 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:32.603 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- 
# net_devs+=("${pci_net_devs[@]}") 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # is_hw=yes 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@247 -- # create_target_ns 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo 
up' 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@28 -- # local -g _dev 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # ips=() 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:15:32.603 
11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772161 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@200 -- # echo 10.0.0.1 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:32.603 10.0.0.1 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772162 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:32.603 10.0.0.2 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:15:32.603 
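(For reference: the `val_to_ip` helper traced above (`nvmf/setup.sh@11`-`@13`) turns the integer pool values 167772161 and 167772162 into 10.0.0.1 and 10.0.0.2 via `printf '%u.%u.%u.%u\n'`. A minimal Python sketch of the same conversion — the function name is taken from the trace, everything else is illustrative:

```python
def val_to_ip(val: int) -> str:
    # Mirror nvmf/setup.sh's val_to_ip: split a 32-bit value into four
    # octets, most significant first, and join them dotted-quad style.
    return "{}.{}.{}.{}".format(
        (val >> 24) & 0xFF,
        (val >> 16) & 0xFF,
        (val >> 8) & 0xFF,
        val & 0xFF,
    )
```

So 167772161 (0x0A000001) maps to 10.0.0.1, matching the address the trace assigns to cvl_0_0.)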
11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # 
dev_map["initiator$id"]=cvl_0_0 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:32.603 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/cvl_0_0/ifalias' 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:32.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:32.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.654 ms 00:15:32.604 00:15:32.604 --- 10.0.0.1 ping statistics --- 00:15:32.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.604 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target0 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:15:32.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:15:32.604 00:15:32.604 --- 10.0.0.2 ping statistics --- 00:15:32.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.604 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # return 0 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@324 -- # 
get_initiator_ip_address 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:32.604 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # 
local dev=initiator1 in_ns= ip 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # return 1 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev= 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@160 -- # return 0 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target0 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 
-- # [[ -n target0 ]] 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:32.604 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@98 -- # local dev=target1 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # return 1 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev= 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@160 -- # return 0 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:15:32.605 ' 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@10 -- # set +x 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1235611 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1235611 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1235611 ']' 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:32.605 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:33.175 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:33.175 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:15:33.175 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:15:33.175 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:33.175 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:15:33.175 11:58:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:45.409 Initializing NVMe Controllers 00:15:45.409 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:45.409 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:45.409 Initialization complete. Launching workers. 00:15:45.409 ======================================================== 00:15:45.409 Latency(us) 00:15:45.409 Device Information : IOPS MiB/s Average min max 00:15:45.409 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19488.50 76.13 3283.82 602.62 15927.99 00:15:45.409 ======================================================== 00:15:45.409 Total : 19488.50 76.13 3283.82 602.62 15927.99 00:15:45.409 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@99 -- # sync 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # set +e 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:45.409 rmmod nvme_tcp 00:15:45.409 rmmod nvme_fabrics 00:15:45.409 rmmod nvme_keyring 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # set -e 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- 
# return 0 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # '[' -n 1235611 ']' 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@337 -- # killprocess 1235611 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1235611 ']' 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1235611 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1235611 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1235611' 00:15:45.409 killing process with pid 1235611 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1235611 00:15:45.409 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1235611 00:15:45.409 nvmf threads initialize successfully 00:15:45.409 bdev subsystem init successfully 00:15:45.409 created a nvmf target service 00:15:45.409 create targets's poll groups done 00:15:45.409 all subsystems of target started 00:15:45.409 nvmf target is running 00:15:45.409 all subsystems of target stopped 00:15:45.409 destroy targets's poll groups done 00:15:45.409 destroyed the nvmf target service 00:15:45.409 bdev subsystem finish successfully 00:15:45.410 nvmf threads destroy successfully 00:15:45.410 11:59:08 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:45.410 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # nvmf_fini 00:15:45.410 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@254 -- # local dev 00:15:45.410 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@257 -- # remove_target_ns 00:15:45.410 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:45.410 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:45.410 11:59:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@258 -- # delete_main_bridge 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@121 -- # return 0 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # ip addr flush dev 
cvl_0_0 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # _dev=0 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # dev_map=() 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@274 -- # iptr 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # iptables-save 00:15:45.981 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:15:45.982 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # iptables-restore 00:15:45.982 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:15:45.982 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:45.982 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:45.982 00:15:45.982 real 
0m21.679s 00:15:45.982 user 0m47.207s 00:15:45.982 sys 0m7.059s 00:15:45.982 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:45.982 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:15:45.982 ************************************ 00:15:45.982 END TEST nvmf_example 00:15:45.982 ************************************ 00:15:45.982 11:59:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:45.982 11:59:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:45.982 11:59:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:45.982 11:59:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:45.982 ************************************ 00:15:45.982 START TEST nvmf_filesystem 00:15:45.982 ************************************ 00:15:45.982 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:15:46.246 * Looking for test storage... 
00:15:46.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:46.246 
11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.246 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:46.246 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:15:46.246 --rc genhtml_branch_coverage=1 00:15:46.247 --rc genhtml_function_coverage=1 00:15:46.247 --rc genhtml_legend=1 00:15:46.247 --rc geninfo_all_blocks=1 00:15:46.247 --rc geninfo_unexecuted_blocks=1 00:15:46.247 00:15:46.247 ' 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:46.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.247 --rc genhtml_branch_coverage=1 00:15:46.247 --rc genhtml_function_coverage=1 00:15:46.247 --rc genhtml_legend=1 00:15:46.247 --rc geninfo_all_blocks=1 00:15:46.247 --rc geninfo_unexecuted_blocks=1 00:15:46.247 00:15:46.247 ' 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:46.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.247 --rc genhtml_branch_coverage=1 00:15:46.247 --rc genhtml_function_coverage=1 00:15:46.247 --rc genhtml_legend=1 00:15:46.247 --rc geninfo_all_blocks=1 00:15:46.247 --rc geninfo_unexecuted_blocks=1 00:15:46.247 00:15:46.247 ' 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:46.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.247 --rc genhtml_branch_coverage=1 00:15:46.247 --rc genhtml_function_coverage=1 00:15:46.247 --rc genhtml_legend=1 00:15:46.247 --rc geninfo_all_blocks=1 00:15:46.247 --rc geninfo_unexecuted_blocks=1 00:15:46.247 00:15:46.247 ' 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:15:46.247 11:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:15:46.247 11:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:15:46.247 11:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:15:46.247 11:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:15:46.247 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:46.248 11:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:15:46.248 
11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:15:46.248 #define SPDK_CONFIG_H 00:15:46.248 #define SPDK_CONFIG_AIO_FSDEV 1 00:15:46.248 #define SPDK_CONFIG_APPS 1 00:15:46.248 #define SPDK_CONFIG_ARCH native 00:15:46.248 #undef SPDK_CONFIG_ASAN 00:15:46.248 #undef SPDK_CONFIG_AVAHI 00:15:46.248 #undef SPDK_CONFIG_CET 00:15:46.248 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:15:46.248 #define SPDK_CONFIG_COVERAGE 1 00:15:46.248 #define SPDK_CONFIG_CROSS_PREFIX 00:15:46.248 #undef SPDK_CONFIG_CRYPTO 00:15:46.248 #undef SPDK_CONFIG_CRYPTO_MLX5 00:15:46.248 #undef SPDK_CONFIG_CUSTOMOCF 00:15:46.248 #undef SPDK_CONFIG_DAOS 00:15:46.248 #define SPDK_CONFIG_DAOS_DIR 00:15:46.248 #define SPDK_CONFIG_DEBUG 1 00:15:46.248 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:15:46.248 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:15:46.248 #define SPDK_CONFIG_DPDK_INC_DIR 00:15:46.248 #define SPDK_CONFIG_DPDK_LIB_DIR 00:15:46.248 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:15:46.248 #undef SPDK_CONFIG_DPDK_UADK 00:15:46.248 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:15:46.248 #define SPDK_CONFIG_EXAMPLES 1 00:15:46.248 #undef SPDK_CONFIG_FC 00:15:46.248 #define SPDK_CONFIG_FC_PATH 00:15:46.248 #define SPDK_CONFIG_FIO_PLUGIN 1 00:15:46.248 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:15:46.248 #define SPDK_CONFIG_FSDEV 1 00:15:46.248 #undef SPDK_CONFIG_FUSE 00:15:46.248 #undef SPDK_CONFIG_FUZZER 00:15:46.248 #define SPDK_CONFIG_FUZZER_LIB 00:15:46.248 #undef SPDK_CONFIG_GOLANG 00:15:46.248 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:15:46.248 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:15:46.248 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:15:46.248 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:15:46.248 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:15:46.248 #undef SPDK_CONFIG_HAVE_LIBBSD 00:15:46.248 #undef SPDK_CONFIG_HAVE_LZ4 00:15:46.248 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:15:46.248 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:15:46.248 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:15:46.248 #define SPDK_CONFIG_IDXD 1 00:15:46.248 #define SPDK_CONFIG_IDXD_KERNEL 1 00:15:46.248 #undef SPDK_CONFIG_IPSEC_MB 00:15:46.248 #define SPDK_CONFIG_IPSEC_MB_DIR 00:15:46.248 #define SPDK_CONFIG_ISAL 1 00:15:46.248 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:15:46.248 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:15:46.248 #define SPDK_CONFIG_LIBDIR 00:15:46.248 #undef SPDK_CONFIG_LTO 00:15:46.248 #define SPDK_CONFIG_MAX_LCORES 128 00:15:46.248 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:15:46.248 #define SPDK_CONFIG_NVME_CUSE 1 00:15:46.248 #undef SPDK_CONFIG_OCF 00:15:46.248 #define SPDK_CONFIG_OCF_PATH 00:15:46.248 #define SPDK_CONFIG_OPENSSL_PATH 00:15:46.248 #undef SPDK_CONFIG_PGO_CAPTURE 00:15:46.248 #define SPDK_CONFIG_PGO_DIR 00:15:46.248 #undef SPDK_CONFIG_PGO_USE 00:15:46.248 #define SPDK_CONFIG_PREFIX /usr/local 00:15:46.248 #undef SPDK_CONFIG_RAID5F 00:15:46.248 #undef SPDK_CONFIG_RBD 00:15:46.248 #define SPDK_CONFIG_RDMA 1 00:15:46.248 #define SPDK_CONFIG_RDMA_PROV verbs 00:15:46.248 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:15:46.248 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:15:46.248 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:15:46.248 #define SPDK_CONFIG_SHARED 1 00:15:46.248 #undef SPDK_CONFIG_SMA 00:15:46.248 #define SPDK_CONFIG_TESTS 1 00:15:46.248 #undef SPDK_CONFIG_TSAN 00:15:46.248 #define SPDK_CONFIG_UBLK 1 00:15:46.248 #define SPDK_CONFIG_UBSAN 1 00:15:46.248 #undef SPDK_CONFIG_UNIT_TESTS 00:15:46.248 #undef SPDK_CONFIG_URING 00:15:46.248 #define SPDK_CONFIG_URING_PATH 00:15:46.248 #undef SPDK_CONFIG_URING_ZNS 00:15:46.248 #undef SPDK_CONFIG_USDT 00:15:46.248 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:15:46.248 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:15:46.248 #define SPDK_CONFIG_VFIO_USER 1 00:15:46.248 #define SPDK_CONFIG_VFIO_USER_DIR 00:15:46.248 #define SPDK_CONFIG_VHOST 1 00:15:46.248 #define SPDK_CONFIG_VIRTIO 1 00:15:46.248 #undef SPDK_CONFIG_VTUNE 00:15:46.248 #define SPDK_CONFIG_VTUNE_DIR 00:15:46.248 #define SPDK_CONFIG_WERROR 1 00:15:46.248 #define SPDK_CONFIG_WPDK_DIR 00:15:46.248 #undef SPDK_CONFIG_XNVME 00:15:46.248 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:15:46.248 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:15:46.249 11:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:15:46.249 
11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:15:46.249 11:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:15:46.249 
11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:15:46.249 11:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:15:46.249 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:46.250 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j144 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1238406 ]] 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1238406 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.ooP070 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ooP070/tests/target /tmp/spdk.ooP070 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=122665885696 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=129356529664 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6690643968 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64668233728 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678264832 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=25847943168 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=25871306752 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23363584 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=efivarfs 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=efivarfs 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=216064 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=507904 00:15:46.251 11:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=287744 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=64677683200 00:15:46.251 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=64678264832 00:15:46.252 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=581632 00:15:46.252 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:46.252 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:15:46.252 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:15:46.252 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12935639040 00:15:46.252 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12935651328 00:15:46.252 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:15:46.514 * Looking for test storage... 
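The `read -r source fs size use avail _ mount` loop above slurps `df -T` output into per-mount associative arrays (`mounts`, `fss`, `sizes`, `avails`, `uses`). A standalone sketch of the same parsing pattern, assuming GNU `df` with POSIX-format output:

```shell
#!/usr/bin/env bash
# Sketch of the df-parsing pattern from the trace: index filesystem type,
# size, and available space by mount point. Names mirror the log; this is
# an illustration, not the harness code.
declare -A fss sizes avails

# df -T -P columns: source, fstype, 1K-blocks, used, available, capacity, mount
while read -r source fs size used avail _ mount; do
    fss["$mount"]=$fs
    sizes["$mount"]=$size
    avails["$mount"]=$avail
done < <(df -T -P | tail -n +2)   # skip the header row

echo "root filesystem type: ${fss[/]}"
```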
00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=122665885696 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8905236480 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.514 11:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:15:46.514 11:59:11 
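`set_test_storage` above walks `storage_candidates`, checks each candidate's filesystem for the requested bytes (2 GiB plus margin here), and falls back to a `mktemp`-style directory. A simplified sketch of that candidate-selection step, under the assumption that byte-level precision from `df -P` 1K blocks is sufficient:

```shell
#!/usr/bin/env bash
# Illustrative sketch (not the harness's set_test_storage): return the first
# candidate directory whose filesystem has at least the requested bytes free.
pick_storage() {
    local requested=$1; shift
    local dir avail
    for dir in "$@"; do
        [ -d "$dir" ] || continue
        # df -P reports 1K blocks; column 4 of the data row is available space
        avail=$(( $(df -P "$dir" | awk 'NR==2 {print $4}') * 1024 ))
        if [ "$avail" -ge "$requested" ]; then
            printf '%s\n' "$dir"
            return 0
        fi
    done
    return 1   # no candidate had enough room
}

pick_storage $((2 * 1024 * 1024 * 1024)) /tmp /var/tmp
```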
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:15:46.514 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
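The `cmp_versions` trace above implements the `lt 1.15 2` check by splitting both versions on `.`/`-`, padding the shorter one, and comparing component by component. A compact sketch of the same idea, assuming purely numeric components:

```shell
#!/usr/bin/env bash
# Minimal sketch of the component-wise less-than check the harness performs
# ("lt 1.15 2"): split on dots/dashes, compare numerically, missing
# components count as 0. Equal versions are not less-than.
version_lt() {
    local IFS='.-'
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < n; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Note that a plain string compare would get this wrong (`"1.15" > "1.2"` lexically), which is why the script splits and compares numerically.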
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:46.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.515 --rc genhtml_branch_coverage=1 00:15:46.515 --rc genhtml_function_coverage=1 00:15:46.515 --rc genhtml_legend=1 00:15:46.515 --rc geninfo_all_blocks=1 00:15:46.515 --rc geninfo_unexecuted_blocks=1 00:15:46.515 00:15:46.515 ' 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:46.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.515 --rc genhtml_branch_coverage=1 00:15:46.515 --rc genhtml_function_coverage=1 00:15:46.515 --rc genhtml_legend=1 00:15:46.515 --rc geninfo_all_blocks=1 00:15:46.515 --rc geninfo_unexecuted_blocks=1 00:15:46.515 00:15:46.515 ' 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:46.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.515 --rc genhtml_branch_coverage=1 00:15:46.515 --rc genhtml_function_coverage=1 00:15:46.515 --rc genhtml_legend=1 00:15:46.515 --rc geninfo_all_blocks=1 00:15:46.515 --rc geninfo_unexecuted_blocks=1 00:15:46.515 00:15:46.515 ' 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:46.515 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:46.515 --rc genhtml_branch_coverage=1 00:15:46.515 --rc genhtml_function_coverage=1 00:15:46.515 --rc genhtml_legend=1 00:15:46.515 --rc geninfo_all_blocks=1 00:15:46.515 --rc geninfo_unexecuted_blocks=1 00:15:46.515 00:15:46.515 ' 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.515 11:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.515 11:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.515 11:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@7 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@50 -- # : 0 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:46.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:46.515 11:59:11 
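The log records a genuine shell failure here: `'[' '' -eq 1 ']'` produces `[: : integer expression expected` because `test`'s `-eq` requires both operands to be integers, and the variable expanded empty. The failure mode and the usual guard can be reproduced directly:

```shell
#!/usr/bin/env bash
# Reproduces the failure recorded in the log: test(1)'s -eq needs integer
# operands, so an empty variable makes the comparison itself error (exit 2)
# rather than simply evaluate false.
flag=""

if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "flag set"
else
    echo "comparison errored or flag not 1"
fi

# Common guard: substitute a default so the operand is always an integer.
if [ "${flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```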
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # remove_target_ns 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # xtrace_disable 00:15:46.515 11:59:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # pci_devs=() 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:54.660 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # net_devs=() 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # e810=() 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # local -ga e810 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # x722=() 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # local -ga x722 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # mlx=() 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # local -ga mlx 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:54.660 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == 
rdma ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:54.660 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:54.660 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:54.661 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:54.661 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # is_hw=yes 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 
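The discovery loop above matches each PCI device's vendor:device pair (Intel `0x8086`, e810 device `0x159b`) and then resolves its interface name from `/sys/bus/pci/devices/<bdf>/net/`, yielding `cvl_0_0` and `cvl_0_1`. A minimal sketch of the same lookup run against a mocked sysfs tree so it works without the hardware (the temp-dir layout is an assumption for illustration; the IDs and names mirror the log):

```shell
#!/usr/bin/env bash
# Build a fake sysfs PCI tree, then match vendor:device as the trace does.
sysfs="$(mktemp -d)"
dev="$sysfs/0000:4b:00.0"
mkdir -p "$dev/net/cvl_0_0"
echo 0x8086 > "$dev/vendor"
echo 0x159b > "$dev/device"

found_dev="" found_net=""
for pci in "$sysfs"/*; do
    vendor="$(cat "$pci/vendor")"
    device="$(cat "$pci/device")"
    if [ "$vendor" = "0x8086" ] && [ "$device" = "0x159b" ]; then
        found_dev="${pci##*/}"
        echo "Found $found_dev ($vendor - $device)"
        # The kernel exposes the interface name under the device's net/ dir
        for net in "$pci/net/"*; do
            found_net="${net##*/}"
            echo "Found net devices under $found_dev: $found_net"
        done
    fi
done
rm -rf "$sysfs"
```

On real hardware the glob runs over `/sys/bus/pci/devices/*`, and an interface is only accepted when its operstate is `up`, matching the `[[ up == up ]]` checks in the trace.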
00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@247 -- # create_target_ns 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 
00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@28 -- # local -g _dev 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # ips=() 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:15:54.661 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772161 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:54.661 10.0.0.1 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # 
set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772162 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:54.661 10.0.0.2 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:54.661 11:59:18 
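`val_to_ip` in the trace turns the 32-bit pool values 167772161 and 167772162 (from `ip_pool=0x0a000001`) into 10.0.0.1 and 10.0.0.2 before `ip addr add`. A minimal sketch of that conversion; the function name matches the log, but the explicit bit-shifting shown here is an assumption about how setup.sh derives the four octets it feeds to `printf`:

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer (0x0a000001 = 167772161) to dotted-quad form,
# producing the same output as the trace's printf '%u.%u.%u.%u'.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
        $(( (val >> 8)  & 255 )) $((  val        & 255 ))
}
val_to_ip 167772161   # initiator-side address -> 10.0.0.1
val_to_ip 167772162   # target-side address   -> 10.0.0.2
```

The `ips=("$ip" $((++ip)))` line earlier in the trace is why the pair is always two consecutive addresses, and `ip_pool += 2` per interface pair keeps subsequent pairs from colliding.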
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:15:54.661 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:54.661 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/cvl_0_0/ifalias 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:54.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:54.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.622 ms 00:15:54.662 00:15:54.662 --- 10.0.0.1 ping statistics --- 00:15:54.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.662 rtt min/avg/max/mdev = 0.622/0.622/0.622/0.000 ms 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target0 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:54.662 11:59:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:15:54.662 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:54.662 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:15:54.662 00:15:54.662 --- 10.0.0.2 ping statistics --- 00:15:54.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:54.662 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # return 0 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:15:54.662 
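The connectivity checks above succeed in both directions (0.622 ms pinging 10.0.0.1 from inside the namespace, 0.330 ms pinging 10.0.0.2 from the host). A minimal sketch of pulling the average rtt out of ping's summary line with awk, fed a canned transcript so it runs without network access (the sample text is copied from the log):

```shell
#!/usr/bin/env bash
# Extract the avg value from ping's "rtt min/avg/max/mdev = ..." line.
stats='rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms'
# Splitting on '/' or ' ' puts min/avg/max/mdev values in fields 7-10.
avg="$(printf '%s\n' "$stats" | awk -F'[/ ]' '/^rtt/ {print $8}')"
echo "avg rtt: $avg ms"
```

A harness could compare this value against a latency budget to flag a misconfigured link before the slower NVMe-oF tests start; the trace itself only checks that one packet made it through.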
11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:54.662 11:59:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # return 1 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev= 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@160 -- # return 0 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target0 
00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target0 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:54.662 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:54.663 11:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target1 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # return 1 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev= 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@160 -- # return 0 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:15:54.663 ' 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test 
nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:15:54.663 ************************************ 00:15:54.663 START TEST nvmf_filesystem_no_in_capsule 00:15:54.663 ************************************ 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=1242373 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 1242373 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:54.663 11:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1242373 ']' 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:54.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:54.663 11:59:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:54.663 [2024-12-05 11:59:19.190265] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:15:54.663 [2024-12-05 11:59:19.190327] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:54.663 [2024-12-05 11:59:19.292991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:54.663 [2024-12-05 11:59:19.346418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:54.663 [2024-12-05 11:59:19.346484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:54.663 [2024-12-05 11:59:19.346493] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:54.663 [2024-12-05 11:59:19.346500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:54.663 [2024-12-05 11:59:19.346507] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:54.663 [2024-12-05 11:59:19.348935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:54.663 [2024-12-05 11:59:19.349096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:54.663 [2024-12-05 11:59:19.349222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.663 [2024-12-05 11:59:19.349223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:55.235 [2024-12-05 11:59:20.071064] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:55.235 Malloc1 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:55.235 [2024-12-05 11:59:20.250179] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:15:55.235 11:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.235 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:15:55.235 { 00:15:55.235 "name": "Malloc1", 00:15:55.235 "aliases": [ 00:15:55.235 "2d1c1a59-d925-4e37-86d4-e89ea5b58978" 00:15:55.235 ], 00:15:55.235 "product_name": "Malloc disk", 00:15:55.235 "block_size": 512, 00:15:55.235 "num_blocks": 1048576, 00:15:55.235 "uuid": "2d1c1a59-d925-4e37-86d4-e89ea5b58978", 00:15:55.235 "assigned_rate_limits": { 00:15:55.235 "rw_ios_per_sec": 0, 00:15:55.235 "rw_mbytes_per_sec": 0, 00:15:55.235 "r_mbytes_per_sec": 0, 00:15:55.235 "w_mbytes_per_sec": 0 00:15:55.235 }, 00:15:55.235 "claimed": true, 00:15:55.235 "claim_type": "exclusive_write", 00:15:55.235 "zoned": false, 00:15:55.235 "supported_io_types": { 00:15:55.235 "read": true, 00:15:55.235 "write": true, 00:15:55.235 "unmap": true, 00:15:55.235 "flush": true, 00:15:55.235 "reset": true, 00:15:55.235 "nvme_admin": false, 00:15:55.235 "nvme_io": false, 00:15:55.235 "nvme_io_md": false, 00:15:55.235 "write_zeroes": true, 00:15:55.235 "zcopy": true, 00:15:55.235 "get_zone_info": false, 00:15:55.235 "zone_management": false, 00:15:55.235 "zone_append": false, 00:15:55.235 "compare": false, 00:15:55.235 "compare_and_write": 
false, 00:15:55.235 "abort": true, 00:15:55.235 "seek_hole": false, 00:15:55.235 "seek_data": false, 00:15:55.235 "copy": true, 00:15:55.235 "nvme_iov_md": false 00:15:55.235 }, 00:15:55.235 "memory_domains": [ 00:15:55.235 { 00:15:55.235 "dma_device_id": "system", 00:15:55.235 "dma_device_type": 1 00:15:55.236 }, 00:15:55.236 { 00:15:55.236 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.236 "dma_device_type": 2 00:15:55.236 } 00:15:55.236 ], 00:15:55.236 "driver_specific": {} 00:15:55.236 } 00:15:55.236 ]' 00:15:55.236 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:15:55.496 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:15:55.496 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:15:55.496 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:15:55.496 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:15:55.496 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:15:55.496 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:15:55.496 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:56.883 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:15:56.883 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:15:56.883 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:56.883 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:56.883 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:15:59.428 11:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:15:59.428 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:15:59.428 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:15:59.689 11:59:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:00.630 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:16:00.630 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:00.630 11:59:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:00.630 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:00.630 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:00.630 ************************************ 00:16:00.630 START TEST filesystem_ext4 00:16:00.630 ************************************ 00:16:00.630 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:00.630 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:00.630 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:00.630 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:00.630 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:16:00.630 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:00.630 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:16:00.630 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:16:00.630 11:59:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:16:00.630 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:16:00.630 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:00.630 mke2fs 1.47.0 (5-Feb-2023) 00:16:00.630 Discarding device blocks: 0/522240 done 00:16:00.630 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:00.630 Filesystem UUID: ac0e7378-347c-41e8-8066-adef986ba154 00:16:00.630 Superblock backups stored on blocks: 00:16:00.630 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:00.630 00:16:00.630 Allocating group tables: 0/64 done 00:16:00.630 Writing inode tables: 0/64 done 00:16:00.890 Creating journal (8192 blocks): done 00:16:03.218 Writing superblocks and filesystem accounting information: 0/6428/64 done 00:16:03.218 00:16:03.218 11:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:16:03.218 11:59:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:09.805 11:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1242373 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:09.805 00:16:09.805 real 0m8.109s 00:16:09.805 user 0m0.031s 00:16:09.805 sys 0m0.051s 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:09.805 ************************************ 00:16:09.805 END TEST filesystem_ext4 00:16:09.805 ************************************ 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:09.805 
11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.805 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:09.805 ************************************ 00:16:09.805 START TEST filesystem_btrfs 00:16:09.805 ************************************ 00:16:09.806 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:09.806 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:09.806 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:09.806 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:09.806 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:16:09.806 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:09.806 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:16:09.806 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:16:09.806 11:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:16:09.806 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:16:09.806 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:09.806 btrfs-progs v6.8.1 00:16:09.806 See https://btrfs.readthedocs.io for more information. 00:16:09.806 00:16:09.806 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:16:09.806 NOTE: several default settings have changed in version 5.15, please make sure 00:16:09.806 this does not affect your deployments: 00:16:09.806 - DUP for metadata (-m dup) 00:16:09.806 - enabled no-holes (-O no-holes) 00:16:09.806 - enabled free-space-tree (-R free-space-tree) 00:16:09.806 00:16:09.806 Label: (null) 00:16:09.806 UUID: e9abb26d-7e1f-413f-b616-10fc328da8b4 00:16:09.806 Node size: 16384 00:16:09.806 Sector size: 4096 (CPU page size: 4096) 00:16:09.806 Filesystem size: 510.00MiB 00:16:09.806 Block group profiles: 00:16:09.806 Data: single 8.00MiB 00:16:09.806 Metadata: DUP 32.00MiB 00:16:09.806 System: DUP 8.00MiB 00:16:09.806 SSD detected: yes 00:16:09.806 Zoned device: no 00:16:09.806 Features: extref, skinny-metadata, no-holes, free-space-tree 00:16:09.806 Checksum: crc32c 00:16:09.806 Number of devices: 1 00:16:09.806 Devices: 00:16:09.806 ID SIZE PATH 00:16:09.806 1 510.00MiB /dev/nvme0n1p1 00:16:09.806 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:09.806 11:59:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1242373 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:09.806 00:16:09.806 real 0m0.699s 00:16:09.806 user 0m0.027s 00:16:09.806 sys 0m0.065s 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:09.806 
11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:09.806 ************************************ 00:16:09.806 END TEST filesystem_btrfs 00:16:09.806 ************************************ 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:09.806 ************************************ 00:16:09.806 START TEST filesystem_xfs 00:16:09.806 ************************************ 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:16:09.806 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:09.806 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:09.806 = sectsz=512 attr=2, projid32bit=1 00:16:09.806 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:09.806 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:09.806 data = bsize=4096 blocks=130560, imaxpct=25 00:16:09.806 = sunit=0 swidth=0 blks 00:16:09.806 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:09.806 log =internal log bsize=4096 blocks=16384, version=2 00:16:09.806 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:09.806 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:10.745 Discarding blocks...Done. 
00:16:10.745 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:16:10.745 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:12.653 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:12.653 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:16:12.653 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:12.653 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:16:12.653 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:16:12.653 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:12.653 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1242373 00:16:12.653 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:12.653 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:12.653 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:12.653 11:59:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:12.653 00:16:12.653 real 0m3.025s 00:16:12.653 user 0m0.031s 00:16:12.653 sys 0m0.052s 00:16:12.653 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:12.653 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:12.653 ************************************ 00:16:12.653 END TEST filesystem_xfs 00:16:12.653 ************************************ 00:16:12.653 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:12.653 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:13.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1242373 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1242373 ']' 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1242373 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1242373 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1242373' 00:16:13.222 killing process with pid 1242373 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1242373 00:16:13.222 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1242373 00:16:13.482 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:13.482 00:16:13.482 real 0m19.339s 00:16:13.482 user 1m16.324s 00:16:13.482 sys 0m1.346s 00:16:13.482 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.482 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:13.482 ************************************ 00:16:13.482 END TEST nvmf_filesystem_no_in_capsule 00:16:13.482 ************************************ 00:16:13.482 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:16:13.482 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.482 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.482 11:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:13.742 ************************************ 00:16:13.742 START TEST nvmf_filesystem_in_capsule 00:16:13.742 ************************************ 00:16:13.742 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:16:13.742 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:16:13.742 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:16:13.742 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:13.742 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:13.742 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:13.742 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=1246301 00:16:13.742 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 1246301 00:16:13.742 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:13.742 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1246301 ']' 00:16:13.742 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.742 11:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:13.742 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.742 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:13.742 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:13.742 [2024-12-05 11:59:38.608943] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:16:13.742 [2024-12-05 11:59:38.608993] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.742 [2024-12-05 11:59:38.697779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:13.742 [2024-12-05 11:59:38.727763] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.742 [2024-12-05 11:59:38.727791] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.742 [2024-12-05 11:59:38.727798] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.742 [2024-12-05 11:59:38.727803] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.742 [2024-12-05 11:59:38.727807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:13.742 [2024-12-05 11:59:38.729028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.742 [2024-12-05 11:59:38.729180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.742 [2024-12-05 11:59:38.729330] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.742 [2024-12-05 11:59:38.729332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:14.683 [2024-12-05 11:59:39.454972] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:14.683 Malloc1 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:14.683 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:14.684 11:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:14.684 [2024-12-05 11:59:39.588832] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.684 11:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:16:14.684 { 00:16:14.684 "name": "Malloc1", 00:16:14.684 "aliases": [ 00:16:14.684 "a617a0b0-84e6-442a-b6bd-a2dc4c278d53" 00:16:14.684 ], 00:16:14.684 "product_name": "Malloc disk", 00:16:14.684 "block_size": 512, 00:16:14.684 "num_blocks": 1048576, 00:16:14.684 "uuid": "a617a0b0-84e6-442a-b6bd-a2dc4c278d53", 00:16:14.684 "assigned_rate_limits": { 00:16:14.684 "rw_ios_per_sec": 0, 00:16:14.684 "rw_mbytes_per_sec": 0, 00:16:14.684 "r_mbytes_per_sec": 0, 00:16:14.684 "w_mbytes_per_sec": 0 00:16:14.684 }, 00:16:14.684 "claimed": true, 00:16:14.684 "claim_type": "exclusive_write", 00:16:14.684 "zoned": false, 00:16:14.684 "supported_io_types": { 00:16:14.684 "read": true, 00:16:14.684 "write": true, 00:16:14.684 "unmap": true, 00:16:14.684 "flush": true, 00:16:14.684 "reset": true, 00:16:14.684 "nvme_admin": false, 00:16:14.684 "nvme_io": false, 00:16:14.684 "nvme_io_md": false, 00:16:14.684 "write_zeroes": true, 00:16:14.684 "zcopy": true, 00:16:14.684 "get_zone_info": false, 00:16:14.684 "zone_management": false, 00:16:14.684 "zone_append": false, 00:16:14.684 "compare": false, 00:16:14.684 "compare_and_write": false, 00:16:14.684 "abort": true, 00:16:14.684 "seek_hole": false, 00:16:14.684 "seek_data": false, 00:16:14.684 "copy": true, 00:16:14.684 "nvme_iov_md": false 00:16:14.684 }, 00:16:14.684 "memory_domains": [ 00:16:14.684 { 00:16:14.684 "dma_device_id": "system", 00:16:14.684 "dma_device_type": 1 00:16:14.684 }, 00:16:14.684 { 00:16:14.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:14.684 "dma_device_type": 2 00:16:14.684 } 00:16:14.684 ], 00:16:14.684 
"driver_specific": {} 00:16:14.684 } 00:16:14.684 ]' 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:16:14.684 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:16.633 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:16:16.633 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:16:16.633 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:16.633 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:16:16.633 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:18.548 11:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:16:18.548 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:16:19.493 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:16:19.493 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:16:19.493 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:19.493 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:19.493 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:19.754 ************************************ 00:16:19.754 START TEST filesystem_in_capsule_ext4 00:16:19.754 ************************************ 00:16:19.754 11:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:16:19.754 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:16:19.754 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:19.754 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:16:19.754 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:16:19.754 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:19.754 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:16:19.754 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:16:19.754 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:16:19.754 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:16:19.754 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:16:19.754 mke2fs 1.47.0 (5-Feb-2023) 00:16:19.754 Discarding device blocks: 
0/522240 done 00:16:19.754 Creating filesystem with 522240 1k blocks and 130560 inodes 00:16:19.754 Filesystem UUID: b451d913-9605-4cc5-acfe-1ad7bda8fa18 00:16:19.754 Superblock backups stored on blocks: 00:16:19.754 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:16:19.754 00:16:19.754 Allocating group tables: 0/64 done 00:16:19.754 Writing inode tables: 0/64 done 00:16:19.754 Creating journal (8192 blocks): done 00:16:22.079 Writing superblocks and filesystem accounting information: 0/64 done 00:16:22.079 00:16:22.079 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:16:22.079 11:59:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:28.660 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:28.660 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:16:28.660 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:28.660 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:16:28.660 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:16:28.660 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:28.660 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1246301 00:16:28.660 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:28.660 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:28.660 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:28.660 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:28.660 00:16:28.660 real 0m7.973s 00:16:28.660 user 0m0.028s 00:16:28.660 sys 0m0.056s 00:16:28.660 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.660 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:16:28.661 ************************************ 00:16:28.661 END TEST filesystem_in_capsule_ext4 00:16:28.661 ************************************ 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:28.661 ************************************ 00:16:28.661 START 
TEST filesystem_in_capsule_btrfs 00:16:28.661 ************************************ 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:16:28.661 btrfs-progs v6.8.1 00:16:28.661 See https://btrfs.readthedocs.io for more information. 00:16:28.661 00:16:28.661 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:16:28.661 NOTE: several default settings have changed in version 5.15, please make sure 00:16:28.661 this does not affect your deployments: 00:16:28.661 - DUP for metadata (-m dup) 00:16:28.661 - enabled no-holes (-O no-holes) 00:16:28.661 - enabled free-space-tree (-R free-space-tree) 00:16:28.661 00:16:28.661 Label: (null) 00:16:28.661 UUID: cf3a1db6-f5c2-42fd-b2ac-8a3eee285e60 00:16:28.661 Node size: 16384 00:16:28.661 Sector size: 4096 (CPU page size: 4096) 00:16:28.661 Filesystem size: 510.00MiB 00:16:28.661 Block group profiles: 00:16:28.661 Data: single 8.00MiB 00:16:28.661 Metadata: DUP 32.00MiB 00:16:28.661 System: DUP 8.00MiB 00:16:28.661 SSD detected: yes 00:16:28.661 Zoned device: no 00:16:28.661 Features: extref, skinny-metadata, no-holes, free-space-tree 00:16:28.661 Checksum: crc32c 00:16:28.661 Number of devices: 1 00:16:28.661 Devices: 00:16:28.661 ID SIZE PATH 00:16:28.661 1 510.00MiB /dev/nvme0n1p1 00:16:28.661 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:16:28.661 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1246301 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:28.661 00:16:28.661 real 0m0.944s 00:16:28.661 user 0m0.024s 00:16:28.661 sys 0m0.064s 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:16:28.661 ************************************ 00:16:28.661 END TEST filesystem_in_capsule_btrfs 00:16:28.661 ************************************ 00:16:28.661 11:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:28.661 ************************************ 00:16:28.661 START TEST filesystem_in_capsule_xfs 00:16:28.661 ************************************ 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:16:28.661 
11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:16:28.661 11:59:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:16:28.661 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:16:28.661 = sectsz=512 attr=2, projid32bit=1 00:16:28.661 = crc=1 finobt=1, sparse=1, rmapbt=0 00:16:28.661 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:16:28.661 data = bsize=4096 blocks=130560, imaxpct=25 00:16:28.661 = sunit=0 swidth=0 blks 00:16:28.661 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:16:28.661 log =internal log bsize=4096 blocks=16384, version=2 00:16:28.661 = sectsz=512 sunit=0 blks, lazy-count=1 00:16:28.661 realtime =none extsz=4096 blocks=0, rtextents=0 00:16:29.601 Discarding blocks...Done. 
00:16:29.601 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:16:29.601 11:59:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:16:32.142 11:59:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:16:32.142 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:16:32.142 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:16:32.142 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:16:32.142 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:16:32.142 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:16:32.142 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1246301 00:16:32.142 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:16:32.142 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:16:32.142 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:16:32.142 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:16:32.142 00:16:32.142 real 0m3.406s 00:16:32.142 user 0m0.028s 00:16:32.142 sys 0m0.055s 00:16:32.142 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.142 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:16:32.142 ************************************ 00:16:32.142 END TEST filesystem_in_capsule_xfs 00:16:32.142 ************************************ 00:16:32.142 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:16:32.403 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:16:32.403 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:32.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:32.403 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:32.403 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:16:32.403 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:32.403 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.663 11:59:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1246301 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1246301 ']' 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1246301 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:32.663 11:59:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1246301 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1246301' 00:16:32.663 killing process with pid 1246301 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1246301 00:16:32.663 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1246301 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:16:32.924 00:16:32.924 real 0m19.205s 00:16:32.924 user 1m15.971s 00:16:32.924 sys 0m1.269s 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:16:32.924 ************************************ 00:16:32.924 END TEST nvmf_filesystem_in_capsule 00:16:32.924 ************************************ 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@99 -- # sync 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # set +e 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:32.924 rmmod nvme_tcp 00:16:32.924 rmmod nvme_fabrics 00:16:32.924 rmmod nvme_keyring 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # set -e 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # return 0 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # nvmf_fini 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@254 -- # local dev 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:32.924 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:35.575 11:59:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@121 -- # return 0 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@273 -- # reset_setup_interfaces 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # _dev=0 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # dev_map=() 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@274 -- # iptr 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # iptables-save 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # iptables-restore 00:16:35.575 00:16:35.575 real 0m49.015s 00:16:35.575 user 2m34.672s 00:16:35.575 sys 0m8.682s 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:16:35.575 ************************************ 00:16:35.575 END TEST nvmf_filesystem 00:16:35.575 ************************************ 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:35.575 11:59:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:35.575 ************************************ 00:16:35.575 START TEST nvmf_target_discovery 00:16:35.575 ************************************ 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 
00:16:35.575 * Looking for test storage... 00:16:35.575 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
scripts/common.sh@344 -- # case "$op" in 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:35.575 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:35.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.576 --rc genhtml_branch_coverage=1 00:16:35.576 --rc genhtml_function_coverage=1 00:16:35.576 --rc genhtml_legend=1 00:16:35.576 --rc geninfo_all_blocks=1 00:16:35.576 --rc geninfo_unexecuted_blocks=1 00:16:35.576 00:16:35.576 ' 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:35.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.576 --rc genhtml_branch_coverage=1 00:16:35.576 --rc genhtml_function_coverage=1 00:16:35.576 --rc genhtml_legend=1 00:16:35.576 --rc geninfo_all_blocks=1 00:16:35.576 --rc geninfo_unexecuted_blocks=1 00:16:35.576 00:16:35.576 ' 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:35.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.576 --rc genhtml_branch_coverage=1 00:16:35.576 --rc genhtml_function_coverage=1 00:16:35.576 --rc genhtml_legend=1 00:16:35.576 --rc geninfo_all_blocks=1 00:16:35.576 --rc geninfo_unexecuted_blocks=1 00:16:35.576 00:16:35.576 ' 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:35.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:35.576 --rc genhtml_branch_coverage=1 00:16:35.576 --rc genhtml_function_coverage=1 00:16:35.576 --rc genhtml_legend=1 00:16:35.576 --rc geninfo_all_blocks=1 00:16:35.576 --rc geninfo_unexecuted_blocks=1 00:16:35.576 00:16:35.576 ' 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:35.576 
12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.576 12:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@50 -- # : 0 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:35.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:16:35.576 12:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:16:35.576 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:16:43.748 12:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # local -A pci_drivers 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # net_devs=() 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # e810=() 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # x722=() 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # mlx=() 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:43.748 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:43.748 12:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:43.748 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:43.748 
12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:43.748 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.748 12:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:43.748 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:16:43.748 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@247 -- # create_target_ns 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # 
local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # ips=() 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 
00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local 
val=167772161 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:16:43.749 10.0.0.1 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:16:43.749 12:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:16:43.749 10.0.0.2 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval 'ip 
netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address 
initiator0 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:16:43.749 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:43.749 12:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:16:43.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:43.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.642 ms 00:16:43.750 00:16:43.750 --- 10.0.0.1 ping statistics --- 00:16:43.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.750 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:43.750 
12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:16:43.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:43.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms
00:16:43.750
00:16:43.750 --- 10.0.0.2 ping statistics ---
00:16:43.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:16:43.750 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair++ ))
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # return 0
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # return 1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@160 -- # return 0
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target0
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # return 1
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@160 -- # return 0
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2
00:16:43.750 '
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:16:43.750 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:16:43.751 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:16:43.751 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:16:43.751 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable
00:16:43.751 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:43.751 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # nvmfpid=1254678
00:16:43.751 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # waitforlisten 1254678
00:16:43.751 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:16:43.751 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1254678 ']'
00:16:43.751 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:43.751 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:43.751 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:43.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:43.751 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:43.751 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:43.751 [2024-12-05 12:00:08.019436] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:16:43.751 [2024-12-05 12:00:08.019506] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:16:43.751 [2024-12-05 12:00:08.117579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:16:43.751 [2024-12-05 12:00:08.170513] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:16:43.751 [2024-12-05 12:00:08.170567] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:16:43.751 [2024-12-05 12:00:08.170575] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:16:43.751 [2024-12-05 12:00:08.170583] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:16:43.751 [2024-12-05 12:00:08.170589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:16:43.751 [2024-12-05 12:00:08.172584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:16:43.751 [2024-12-05 12:00:08.172869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:16:43.751 [2024-12-05 12:00:08.173030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:16:43.751 [2024-12-05 12:00:08.173032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.015 [2024-12-05 12:00:08.882773] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.015 Null1
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.015 [2024-12-05 12:00:08.962746] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.015 Null2
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.015 12:00:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.015 Null3
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.015 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4)
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.278 Null4
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420
00:16:44.278
00:16:44.278 Discovery Log Number of Records 6, Generation counter 6
00:16:44.278 =====Discovery Log Entry 0======
00:16:44.278 trtype: tcp
00:16:44.278 adrfam: ipv4
00:16:44.278 subtype: current discovery subsystem
00:16:44.278 treq: not required
00:16:44.278 portid: 0
00:16:44.278 trsvcid: 4420
00:16:44.278 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:16:44.278 traddr: 10.0.0.2
00:16:44.278 eflags: explicit discovery connections, duplicate discovery information
00:16:44.278 sectype: none
00:16:44.278 =====Discovery Log Entry 1======
00:16:44.278 trtype: tcp
00:16:44.278 adrfam: ipv4
00:16:44.278 subtype: nvme subsystem
00:16:44.278 treq: not required
00:16:44.278 portid: 0
00:16:44.278 trsvcid: 4420
00:16:44.278 subnqn: nqn.2016-06.io.spdk:cnode1
00:16:44.278 traddr: 10.0.0.2
00:16:44.278 eflags: none
00:16:44.278 sectype: none
00:16:44.278 =====Discovery Log Entry 2======
00:16:44.278 trtype: tcp
00:16:44.278 adrfam: ipv4
00:16:44.278 subtype: nvme subsystem
00:16:44.278 treq: not required
00:16:44.278 portid: 0
00:16:44.278 trsvcid: 4420
00:16:44.278 subnqn: nqn.2016-06.io.spdk:cnode2
00:16:44.278 traddr: 10.0.0.2
00:16:44.278 eflags: none
00:16:44.278 sectype: none
00:16:44.278 =====Discovery Log Entry 3======
00:16:44.278 trtype: tcp
00:16:44.278 adrfam: ipv4
00:16:44.278 subtype: nvme subsystem
00:16:44.278 treq: not required
00:16:44.278 portid: 0
00:16:44.278 trsvcid: 4420
00:16:44.278 subnqn: nqn.2016-06.io.spdk:cnode3
00:16:44.278 traddr: 10.0.0.2
00:16:44.278 eflags: none
00:16:44.278 sectype: none
00:16:44.278 =====Discovery Log Entry 4======
00:16:44.278 trtype: tcp
00:16:44.278 adrfam: ipv4
00:16:44.278 subtype: nvme subsystem
00:16:44.278 treq: not required
00:16:44.278 portid: 0
00:16:44.278 trsvcid: 4420
00:16:44.278 subnqn: nqn.2016-06.io.spdk:cnode4
00:16:44.278 traddr: 10.0.0.2
00:16:44.278 eflags: none
00:16:44.278 sectype: none
00:16:44.278 =====Discovery Log Entry 5======
00:16:44.278 trtype: tcp
00:16:44.278 adrfam: ipv4
00:16:44.278 subtype: discovery subsystem referral
00:16:44.278 treq: not required
00:16:44.278 portid: 0
00:16:44.278 trsvcid: 4430
00:16:44.278 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:16:44.278 traddr: 10.0.0.2
00:16:44.278 eflags: none
00:16:44.278 sectype: none
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:16:44.278 Perform nvmf subsystem discovery via RPC
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.278 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.278 [
00:16:44.278 {
00:16:44.278 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:16:44.278 "subtype": "Discovery",
00:16:44.278 "listen_addresses": [
00:16:44.278 {
00:16:44.278 "trtype": "TCP",
00:16:44.278 "adrfam": "IPv4",
00:16:44.278 "traddr": "10.0.0.2",
00:16:44.278 "trsvcid": "4420"
00:16:44.278 }
00:16:44.278 ],
00:16:44.278 "allow_any_host": true,
00:16:44.278 "hosts": []
00:16:44.278 },
00:16:44.278 {
00:16:44.278 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:16:44.278 "subtype": "NVMe",
00:16:44.278 "listen_addresses": [
00:16:44.278 {
00:16:44.278 "trtype": "TCP",
00:16:44.278 "adrfam": "IPv4",
00:16:44.279 "traddr": "10.0.0.2",
00:16:44.279 "trsvcid": "4420"
00:16:44.279 }
00:16:44.279 ],
00:16:44.279 "allow_any_host": true,
00:16:44.279 "hosts": [],
00:16:44.279 "serial_number": "SPDK00000000000001",
00:16:44.279 "model_number": "SPDK bdev Controller",
00:16:44.279 "max_namespaces": 32,
00:16:44.279 "min_cntlid": 1,
00:16:44.279 "max_cntlid": 65519,
00:16:44.279 "namespaces": [
00:16:44.279 {
00:16:44.279 "nsid": 1,
00:16:44.279 "bdev_name": "Null1",
00:16:44.279 "name": "Null1",
00:16:44.279 "nguid": "E7E2BB1FF9234DF6B97FE38E84D7B3A2",
00:16:44.279 "uuid": "e7e2bb1f-f923-4df6-b97f-e38e84d7b3a2"
00:16:44.279 }
00:16:44.279 ]
00:16:44.279 },
00:16:44.279 {
00:16:44.279 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:16:44.279 "subtype": "NVMe",
00:16:44.279 "listen_addresses": [
00:16:44.279 {
00:16:44.279 "trtype": "TCP",
00:16:44.279 "adrfam": "IPv4",
00:16:44.279 "traddr": "10.0.0.2",
00:16:44.279 "trsvcid": "4420"
00:16:44.279 }
00:16:44.279 ],
00:16:44.279 "allow_any_host": true,
00:16:44.279 "hosts": [],
00:16:44.279 "serial_number": "SPDK00000000000002",
00:16:44.279 "model_number": "SPDK bdev Controller",
00:16:44.279 "max_namespaces": 32,
00:16:44.279 "min_cntlid": 1,
00:16:44.279 "max_cntlid": 65519,
00:16:44.279 "namespaces": [
00:16:44.279 {
00:16:44.279 "nsid": 1,
00:16:44.279 "bdev_name": "Null2",
00:16:44.279 "name": "Null2",
00:16:44.279 "nguid": "F0F3C3762B7D4FC99EBCF311F8E29C52",
00:16:44.279 "uuid": "f0f3c376-2b7d-4fc9-9ebc-f311f8e29c52"
00:16:44.279 }
00:16:44.279 ]
00:16:44.279 },
00:16:44.279 {
00:16:44.279 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:16:44.279 "subtype": "NVMe",
00:16:44.279 "listen_addresses": [
00:16:44.279 {
00:16:44.279 "trtype": "TCP",
00:16:44.279 "adrfam": "IPv4",
00:16:44.279 "traddr": "10.0.0.2",
00:16:44.279 "trsvcid": "4420"
00:16:44.279 }
00:16:44.279 ],
00:16:44.279 "allow_any_host": true,
00:16:44.279 "hosts": [],
00:16:44.279 "serial_number": "SPDK00000000000003",
00:16:44.279 "model_number": "SPDK bdev Controller",
00:16:44.279 "max_namespaces": 32,
00:16:44.279 "min_cntlid": 1,
00:16:44.279 "max_cntlid": 65519,
00:16:44.279 "namespaces": [
00:16:44.279 {
00:16:44.279 "nsid": 1,
00:16:44.279 "bdev_name": "Null3",
00:16:44.279 "name": "Null3",
00:16:44.279 "nguid": "9C0E3E32B6414633913BABE1CB4E14AA",
00:16:44.279 "uuid": "9c0e3e32-b641-4633-913b-abe1cb4e14aa"
00:16:44.279 }
00:16:44.279 ]
00:16:44.279 },
00:16:44.279 {
00:16:44.279 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:16:44.279 "subtype": "NVMe",
00:16:44.279 "listen_addresses": [
00:16:44.279 {
00:16:44.279 "trtype": "TCP",
00:16:44.279 "adrfam": "IPv4",
00:16:44.279 "traddr": "10.0.0.2",
00:16:44.279 "trsvcid": "4420"
00:16:44.279 }
00:16:44.279 ],
00:16:44.279 "allow_any_host": true,
00:16:44.279 "hosts": [],
00:16:44.279 "serial_number": "SPDK00000000000004",
00:16:44.279 "model_number": "SPDK bdev Controller",
00:16:44.279 "max_namespaces": 32,
00:16:44.279 "min_cntlid": 1,
00:16:44.279 "max_cntlid": 65519,
00:16:44.279 "namespaces": [
00:16:44.279 {
00:16:44.279 "nsid": 1,
00:16:44.279 "bdev_name": "Null4",
00:16:44.279 "name": "Null4",
00:16:44.279 "nguid": "B9187006AC4446949C1082817A6E7F37",
00:16:44.279 "uuid": "b9187006-ac44-4694-9c10-82817a6e7f37"
00:16:44.279 }
00:16:44.279 ]
00:16:44.279 }
00:16:44.279 ]
00:16:44.279 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.279 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:16:44.279 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:16:44.279 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:16:44.279 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.279 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name'
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs=
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']'
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # nvmfcleanup
00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@99 -- # sync
00:16:44.541
12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # set +e 00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:44.541 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:44.541 rmmod nvme_tcp 00:16:44.541 rmmod nvme_fabrics 00:16:44.541 rmmod nvme_keyring 00:16:44.542 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:44.542 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # set -e 00:16:44.542 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # return 0 00:16:44.542 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # '[' -n 1254678 ']' 00:16:44.542 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@337 -- # killprocess 1254678 00:16:44.542 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1254678 ']' 00:16:44.542 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1254678 00:16:44.542 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:16:44.542 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:44.542 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1254678 00:16:44.804 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:44.804 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:16:44.804 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1254678' 00:16:44.804 killing process with pid 1254678 00:16:44.804 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1254678 00:16:44.804 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1254678 00:16:44.804 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:44.804 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:16:44.804 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@254 -- # local dev 00:16:44.804 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:16:44.804 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:44.804 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:44.804 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@121 -- # return 0 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:16:47.349 12:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:16:47.349 
12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@274 -- # iptr 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # iptables-save 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:16:47.349 00:16:47.349 real 0m11.814s 00:16:47.349 user 0m8.747s 00:16:47.349 sys 0m6.248s 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:47.349 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:16:47.349 ************************************ 00:16:47.350 END TEST nvmf_target_discovery 00:16:47.350 ************************************ 00:16:47.350 12:00:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:47.350 12:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:47.350 12:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:47.350 12:00:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:47.350 ************************************ 00:16:47.350 START TEST nvmf_referrals 00:16:47.350 ************************************ 00:16:47.350 12:00:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:16:47.350 * Looking for test storage... 
00:16:47.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:16:47.350 12:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:47.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.350 
--rc genhtml_branch_coverage=1 00:16:47.350 --rc genhtml_function_coverage=1 00:16:47.350 --rc genhtml_legend=1 00:16:47.350 --rc geninfo_all_blocks=1 00:16:47.350 --rc geninfo_unexecuted_blocks=1 00:16:47.350 00:16:47.350 ' 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:47.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.350 --rc genhtml_branch_coverage=1 00:16:47.350 --rc genhtml_function_coverage=1 00:16:47.350 --rc genhtml_legend=1 00:16:47.350 --rc geninfo_all_blocks=1 00:16:47.350 --rc geninfo_unexecuted_blocks=1 00:16:47.350 00:16:47.350 ' 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:47.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.350 --rc genhtml_branch_coverage=1 00:16:47.350 --rc genhtml_function_coverage=1 00:16:47.350 --rc genhtml_legend=1 00:16:47.350 --rc geninfo_all_blocks=1 00:16:47.350 --rc geninfo_unexecuted_blocks=1 00:16:47.350 00:16:47.350 ' 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:47.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:47.350 --rc genhtml_branch_coverage=1 00:16:47.350 --rc genhtml_function_coverage=1 00:16:47.350 --rc genhtml_legend=1 00:16:47.350 --rc geninfo_all_blocks=1 00:16:47.350 --rc geninfo_unexecuted_blocks=1 00:16:47.350 00:16:47.350 ' 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:47.350 
12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.350 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@50 
-- # : 0 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:47.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # 
nvmftestinit 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # local -g is_hw=no 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # remove_target_ns 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # xtrace_disable 00:16:47.351 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # pci_devs=() 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # pci_net_devs=() 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:55.495 12:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # local -A pci_drivers 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # net_devs=() 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # e810=() 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # local -ga e810 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # x722=() 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # local -ga x722 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # mlx=() 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # local -ga mlx 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:55.495 12:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:55.495 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.495 12:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:55.495 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up 
== up ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:55.495 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:55.495 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:16:55.495 12:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # is_hw=yes 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@247 -- # create_target_ns 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@248 -- # 
setup_interfaces 1 phy 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@28 -- # local -g _dev 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # ips=() 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:16:55.495 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@58 -- # [[ phy == veth 
]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772161 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 
00:16:55.496 10.0.0.1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772162 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:16:55.496 10.0.0.2 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:16:55.496 12:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:55.496 12:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:16:55.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:55.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.610 ms 00:16:55.496 00:16:55.496 --- 10.0.0.1 ping statistics --- 00:16:55.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.496 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target0 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:55.496 12:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:16:55.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:55.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:16:55.496 00:16:55.496 --- 10.0.0.2 ping statistics --- 00:16:55.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:55.496 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:16:55.496 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair++ )) 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # return 0 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:16:55.497 12:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # 
get_ip_address initiator1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # return 1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev= 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@160 -- # return 0 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:55.497 12:00:19 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target0 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # return 1 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev= 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@160 -- # return 0 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:16:55.497 ' 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # timing_enter 
start_nvmf_tgt 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # nvmfpid=1259865 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # waitforlisten 1259865 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1259865 ']' 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:55.497 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.497 [2024-12-05 12:00:19.924975] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:16:55.497 [2024-12-05 12:00:19.925042] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:55.497 [2024-12-05 12:00:20.023502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:55.497 [2024-12-05 12:00:20.080680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:55.497 [2024-12-05 12:00:20.080735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:55.497 [2024-12-05 12:00:20.080749] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:55.497 [2024-12-05 12:00:20.080756] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:55.497 [2024-12-05 12:00:20.080762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:55.497 [2024-12-05 12:00:20.083001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:55.497 [2024-12-05 12:00:20.083161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:55.497 [2024-12-05 12:00:20.083308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.497 [2024-12-05 12:00:20.083309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:55.759 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.759 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:16:55.759 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:55.759 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:55.759 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.759 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.759 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:55.759 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.759 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:55.759 [2024-12-05 12:00:20.793014] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:55.759 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.759 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:16:55.759 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.759 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.021 [2024-12-05 12:00:20.822706] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- 
# rpc_cmd nvmf_discovery_get_referrals 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:16:56.021 12:00:20 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:56.021 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.282 12:00:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:16:56.282 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:16:56.283 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:56.283 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:56.283 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:56.283 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq 
-r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:56.283 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:56.544 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:56.544 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:16:56.544 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:56.545 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:56.806 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:16:56.806 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:16:56.806 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:16:56.806 12:00:21 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:16:56.806 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:56.806 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:56.806 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:56.806 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:16:57.068 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:16:57.068 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:16:57.068 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:16:57.068 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:57.068 12:00:21 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t 
tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:57.068 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:57.329 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:16:57.329 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:16:57.329 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:16:57.329 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:16:57.329 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:16:57.329 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:57.329 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:16:57.329 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:16:57.329 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:16:57.330 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:16:57.330 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery 
subsystem referral' 00:16:57.330 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:57.330 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # 
get_referral_ips nvme 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:16:57.590 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@99 -- # sync 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # set +e 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:57.851 rmmod nvme_tcp 00:16:57.851 rmmod nvme_fabrics 00:16:57.851 rmmod nvme_keyring 00:16:57.851 12:00:22 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # set -e 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # return 0 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # '[' -n 1259865 ']' 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@337 -- # killprocess 1259865 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1259865 ']' 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1259865 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1259865 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1259865' 00:16:57.851 killing process with pid 1259865 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1259865 00:16:57.851 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1259865 00:16:58.113 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:58.113 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # 
nvmf_fini 00:16:58.113 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@254 -- # local dev 00:16:58.113 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@257 -- # remove_target_ns 00:16:58.113 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:58.113 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:58.113 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@121 -- # return 0 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:00.025 12:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:17:00.025 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:17:00.026 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:00.026 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:17:00.026 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:17:00.026 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:00.026 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # _dev=0 00:17:00.026 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # dev_map=() 00:17:00.026 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@274 -- # iptr 00:17:00.026 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # iptables-save 00:17:00.026 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:00.026 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # iptables-restore 00:17:00.286 00:17:00.286 real 0m13.137s 00:17:00.286 user 0m14.748s 00:17:00.286 sys 0m6.496s 00:17:00.286 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:00.286 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:17:00.286 ************************************ 00:17:00.286 END TEST nvmf_referrals 00:17:00.286 ************************************ 00:17:00.286 12:00:25 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:00.286 12:00:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:00.286 12:00:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:00.286 12:00:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:00.286 ************************************ 00:17:00.286 START TEST nvmf_connect_disconnect 00:17:00.286 ************************************ 00:17:00.286 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:17:00.286 * Looking for test storage... 00:17:00.286 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.286 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:00.286 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:17:00.286 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 
00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:00.548 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:00.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.549 --rc genhtml_branch_coverage=1 00:17:00.549 --rc 
genhtml_function_coverage=1 00:17:00.549 --rc genhtml_legend=1 00:17:00.549 --rc geninfo_all_blocks=1 00:17:00.549 --rc geninfo_unexecuted_blocks=1 00:17:00.549 00:17:00.549 ' 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:00.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.549 --rc genhtml_branch_coverage=1 00:17:00.549 --rc genhtml_function_coverage=1 00:17:00.549 --rc genhtml_legend=1 00:17:00.549 --rc geninfo_all_blocks=1 00:17:00.549 --rc geninfo_unexecuted_blocks=1 00:17:00.549 00:17:00.549 ' 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:00.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.549 --rc genhtml_branch_coverage=1 00:17:00.549 --rc genhtml_function_coverage=1 00:17:00.549 --rc genhtml_legend=1 00:17:00.549 --rc geninfo_all_blocks=1 00:17:00.549 --rc geninfo_unexecuted_blocks=1 00:17:00.549 00:17:00.549 ' 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:00.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:00.549 --rc genhtml_branch_coverage=1 00:17:00.549 --rc genhtml_function_coverage=1 00:17:00.549 --rc genhtml_legend=1 00:17:00.549 --rc geninfo_all_blocks=1 00:17:00.549 --rc geninfo_unexecuted_blocks=1 00:17:00.549 00:17:00.549 ' 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:00.549 12:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@50 -- # : 0 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:00.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.549 12:00:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:17:00.549 12:00:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:08.688 12:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # e810=() 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # x722=() 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:08.688 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound 
]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:08.688 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:08.688 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.689 12:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:08.689 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 
00:17:08.689 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@247 -- # create_target_ns 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:08.689 12:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@28 -- # local -g _dev 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:08.689 12:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # 
local val=167772161 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:17:08.689 10.0.0.1 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:08.689 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.2 
00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:17:08.690 10.0.0.2 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:08.690 12:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 
00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:08.690 12:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:08.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:08.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.649 ms 00:17:08.690 00:17:08.690 --- 10.0.0.1 ping statistics --- 00:17:08.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.690 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:08.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:08.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:17:08.690 00:17:08.690 --- 10.0.0.2 ping statistics --- 00:17:08.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:08.690 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # return 0 00:17:08.690 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:08.691 12:00:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # return 1 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev= 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@160 -- # return 0 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 
00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:08.691 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target1 00:17:08.691 12:00:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # return 1 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev= 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@160 -- # return 0 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:17:08.691 ' 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # nvmfpid=1264669 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # waitforlisten 1264669 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1264669 ']' 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.691 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.692 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.692 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:08.692 [2024-12-05 12:00:33.109166] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:17:08.692 [2024-12-05 12:00:33.109235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.692 [2024-12-05 12:00:33.207766] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:08.692 [2024-12-05 12:00:33.260303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.692 [2024-12-05 12:00:33.260374] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.692 [2024-12-05 12:00:33.260383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.692 [2024-12-05 12:00:33.260390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.692 [2024-12-05 12:00:33.260397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:08.692 [2024-12-05 12:00:33.262791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.692 [2024-12-05 12:00:33.262956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.692 [2024-12-05 12:00:33.263118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:08.692 [2024-12-05 12:00:33.263118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.953 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.953 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:17:08.953 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:08.953 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:08.953 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:08.953 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:08.953 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:17:08.953 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.953 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:08.953 [2024-12-05 12:00:33.992640] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.953 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.213 12:00:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:09.213 [2024-12-05 12:00:34.074864] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:17:09.213 12:00:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:17:13.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:16.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.007 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:24.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.505 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@99 -- # sync 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # set +e 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:27.505 rmmod nvme_tcp 00:17:27.505 rmmod nvme_fabrics 00:17:27.505 rmmod nvme_keyring 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # set -e 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # return 0 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # '[' -n 1264669 ']' 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@337 -- # killprocess 1264669 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1264669 ']' 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1264669 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:17:27.505 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:27.506 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1264669 00:17:27.506 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:27.506 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:27.506 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1264669' 00:17:27.506 killing process with pid 1264669 00:17:27.506 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1264669 00:17:27.506 12:00:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1264669 00:17:27.506 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:27.506 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:17:27.506 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@254 -- # local dev 00:17:27.506 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:27.506 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:27.506 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:27.506 12:00:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@121 -- # return 0 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:17:29.416 12:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@274 -- # iptr 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # iptables-save 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # grep -v 
SPDK_NVMF 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # iptables-restore 00:17:29.416 00:17:29.416 real 0m29.261s 00:17:29.416 user 1m18.158s 00:17:29.416 sys 0m7.060s 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:29.416 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:17:29.416 ************************************ 00:17:29.416 END TEST nvmf_connect_disconnect 00:17:29.416 ************************************ 00:17:29.676 12:00:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:29.676 12:00:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:29.676 12:00:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:29.676 12:00:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:29.676 ************************************ 00:17:29.676 START TEST nvmf_multitarget 00:17:29.676 ************************************ 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:17:29.677 * Looking for test storage... 
00:17:29.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:29.677 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.677 --rc genhtml_branch_coverage=1 00:17:29.677 --rc genhtml_function_coverage=1 00:17:29.677 --rc genhtml_legend=1 00:17:29.677 --rc geninfo_all_blocks=1 00:17:29.677 --rc geninfo_unexecuted_blocks=1 00:17:29.677 00:17:29.677 ' 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:29.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.677 --rc genhtml_branch_coverage=1 00:17:29.677 --rc genhtml_function_coverage=1 00:17:29.677 --rc genhtml_legend=1 00:17:29.677 --rc geninfo_all_blocks=1 00:17:29.677 --rc geninfo_unexecuted_blocks=1 00:17:29.677 00:17:29.677 ' 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:29.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.677 --rc genhtml_branch_coverage=1 00:17:29.677 --rc genhtml_function_coverage=1 00:17:29.677 --rc genhtml_legend=1 00:17:29.677 --rc geninfo_all_blocks=1 00:17:29.677 --rc geninfo_unexecuted_blocks=1 00:17:29.677 00:17:29.677 ' 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:29.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.677 --rc genhtml_branch_coverage=1 00:17:29.677 --rc genhtml_function_coverage=1 00:17:29.677 --rc genhtml_legend=1 00:17:29.677 --rc geninfo_all_blocks=1 00:17:29.677 --rc geninfo_unexecuted_blocks=1 00:17:29.677 00:17:29.677 ' 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.677 12:00:54 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.677 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
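Each `paths/export.sh` invocation above prepends the same `/opt/...` directories again, so the exported `PATH` visibly accumulates duplicate segments. A hedged sketch of an idempotent prepend that would avoid that growth (`prepend_once` and `demo_path` are illustrative names, not part of the SPDK scripts):

```shell
# Prepend a directory only if it is not already a segment of the path.
# Wrapping both sides in ':' makes the substring match segment-exact.
demo_path="/usr/local/bin:/usr/bin:/bin"
prepend_once() {
    case ":$demo_path:" in
        *":$1:"*) ;;                      # already present: no-op
        *) demo_path="$1:$demo_path" ;;
    esac
}

prepend_once /opt/go/1.21.1/bin
prepend_once /opt/go/1.21.1/bin           # second call changes nothing
echo "$demo_path"   # /opt/go/1.21.1/bin:/usr/local/bin:/usr/bin:/bin
```

Duplicate `PATH` entries are harmless for lookup (the first match wins) but they bloat the environment and the logs, as seen here.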
nvmf/common.sh@50 -- # : 0 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:29.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:29.939 12:00:54 
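The `common.sh: line 31: [: : integer expression expected` message captured above comes from running `-eq` against an empty expansion (`'[' '' -eq 1 ']'`). A small sketch of the failure mode and the usual guard, with `NO_HUGE_FLAG` as a hypothetical stand-in for the script's variable:

```shell
NO_HUGE_FLAG=""    # unset/empty, as in the logged run

# This is the failing shape: "" is not an integer, so test prints the
# "integer expression expected" diagnostic and returns status 2.
if [ "$NO_HUGE_FLAG" -eq 1 ] 2>/dev/null; then
    echo "huge pages disabled"
fi

# Guarded form: ${var:-0} substitutes a numeric default for an empty or
# unset variable, keeping the numeric test well-formed.
if [ "${NO_HUGE_FLAG:-0}" -eq 1 ]; then
    echo "huge pages disabled"
else
    echo "huge pages enabled"
fi
```

In the log the error is non-fatal because the failed test simply falls through to the next branch, but the diagnostic still lands in the output.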
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # remove_target_ns 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # xtrace_disable 00:17:29.939 12:00:54 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # pci_devs=() 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # net_devs=() 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:38.082 12:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # e810=() 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # local -ga e810 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # x722=() 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # local -ga x722 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # mlx=() 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # local -ga mlx 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:38.082 12:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:38.082 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:38.082 12:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:38.082 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.082 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:38.083 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:38.083 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # is_hw=yes 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:17:38.083 
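The `pci_net_devs=("${pci_net_devs[@]##*/}")` step traced above converts full sysfs paths into bare interface names in one expansion: `##*/` greedily removes everything through the last slash in every array element. A self-contained illustration:

```shell
# Glob result as produced by "/sys/bus/pci/devices/$pci/net/"* in the log;
# the path here is a sample value, not read from the live system.
pci_net_devs=(/sys/bus/pci/devices/0000:4b:00.0/net/cvl_0_0)

# Strip the directory prefix from every element (array-wide basename).
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[0]}"   # cvl_0_0
```

This is why the log reports `Found net devices under 0000:4b:00.0: cvl_0_0` rather than the full sysfs path.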
12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@247 -- # create_target_ns 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@27 -- # local -gA dev_map 
00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@28 -- # local -g _dev 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # ips=() 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:38.083 12:01:01 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:17:38.083 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772161 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:17:38.083 10.0.0.1 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:17:38.083 12:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772162 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:17:38.083 10.0.0.2 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
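The `val_to_ip` calls above turn the `ip_pool` counter (167772161, i.e. 0x0A000001) into dotted-quad addresses for `ip addr add`. A minimal re-implementation of the same idea using shifts and masks (a sketch of the technique, not SPDK's exact helper body):

```shell
# Split a 32-bit integer into four octets, most significant first, and
# print them as a dotted-quad IPv4 address.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Incrementing the pool integer by 2 per interface pair, as `(( _dev++, ip_pool += 2 ))` does later in this log, therefore hands out consecutive initiator/target addresses (10.0.0.1/10.0.0.2, then 10.0.0.3/10.0.0.4, and so on).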
nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:17:38.083 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:38.084 
12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:38.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:38.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.710 ms 00:17:38.084 00:17:38.084 --- 10.0.0.1 ping statistics --- 00:17:38.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.084 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:38.084 
12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:38.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:38.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:17:38.084 00:17:38.084 --- 10.0.0.2 ping statistics --- 00:17:38.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:38.084 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # return 0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@322 -- # 
NVMF_TARGET_INTERFACE2= 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:38.084 12:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # return 1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev= 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@160 -- # return 0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:38.084 12:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target0 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:38.084 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target1 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # return 1 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev= 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@160 -- # return 0 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:17:38.085 ' 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:38.085 12:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # nvmfpid=1272807 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # waitforlisten 1272807 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1272807 ']' 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:38.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:38.085 12:01:02 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:38.085 [2024-12-05 12:01:02.439562] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:17:38.085 [2024-12-05 12:01:02.439628] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:38.085 [2024-12-05 12:01:02.524136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:38.085 [2024-12-05 12:01:02.577451] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:38.085 [2024-12-05 12:01:02.577507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:38.085 [2024-12-05 12:01:02.577516] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:38.085 [2024-12-05 12:01:02.577523] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:38.085 [2024-12-05 12:01:02.577529] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:38.085 [2024-12-05 12:01:02.579511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:38.085 [2024-12-05 12:01:02.579580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.085 [2024-12-05 12:01:02.579749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:38.085 [2024-12-05 12:01:02.579749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:38.347 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.347 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:38.347 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:38.347 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:38.347 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:38.347 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:38.347 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:38.347 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:38.347 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:38.608 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:38.609 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:17:38.609 "nvmf_tgt_1" 00:17:38.609 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:38.609 "nvmf_tgt_2" 00:17:38.869 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:38.869 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:38.869 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:38.869 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:38.869 true 00:17:38.869 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:39.130 true 00:17:39.130 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:39.130 12:01:03 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:39.130 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:39.130 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:39.130 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:39.130 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:39.130 12:01:04 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@99 -- # sync 00:17:39.130 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:39.130 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # set +e 00:17:39.130 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:39.130 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:39.130 rmmod nvme_tcp 00:17:39.130 rmmod nvme_fabrics 00:17:39.130 rmmod nvme_keyring 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # set -e 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # return 0 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # '[' -n 1272807 ']' 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@337 -- # killprocess 1272807 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1272807 ']' 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1272807 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1272807 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1272807' 00:17:39.390 killing process with pid 1272807 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1272807 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1272807 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # nvmf_fini 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@254 -- # local dev 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@257 -- # remove_target_ns 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:39.390 12:01:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@258 -- # delete_main_bridge 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@121 -- # return 0 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:41.935 12:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # _dev=0 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # dev_map=() 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@274 -- # iptr 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # 
iptables-save 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # iptables-restore 00:17:41.935 00:17:41.935 real 0m12.006s 00:17:41.935 user 0m10.559s 00:17:41.935 sys 0m6.150s 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:41.935 ************************************ 00:17:41.935 END TEST nvmf_multitarget 00:17:41.935 ************************************ 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:41.935 ************************************ 00:17:41.935 START TEST nvmf_rpc 00:17:41.935 ************************************ 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:41.935 * Looking for test storage... 
00:17:41.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:41.935 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.936 12:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:41.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.936 --rc genhtml_branch_coverage=1 00:17:41.936 --rc genhtml_function_coverage=1 00:17:41.936 --rc genhtml_legend=1 00:17:41.936 --rc geninfo_all_blocks=1 00:17:41.936 --rc geninfo_unexecuted_blocks=1 
00:17:41.936 00:17:41.936 ' 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:41.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.936 --rc genhtml_branch_coverage=1 00:17:41.936 --rc genhtml_function_coverage=1 00:17:41.936 --rc genhtml_legend=1 00:17:41.936 --rc geninfo_all_blocks=1 00:17:41.936 --rc geninfo_unexecuted_blocks=1 00:17:41.936 00:17:41.936 ' 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:41.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.936 --rc genhtml_branch_coverage=1 00:17:41.936 --rc genhtml_function_coverage=1 00:17:41.936 --rc genhtml_legend=1 00:17:41.936 --rc geninfo_all_blocks=1 00:17:41.936 --rc geninfo_unexecuted_blocks=1 00:17:41.936 00:17:41.936 ' 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:41.936 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.936 --rc genhtml_branch_coverage=1 00:17:41.936 --rc genhtml_function_coverage=1 00:17:41.936 --rc genhtml_legend=1 00:17:41.936 --rc geninfo_all_blocks=1 00:17:41.936 --rc geninfo_unexecuted_blocks=1 00:17:41.936 00:17:41.936 ' 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.936 12:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
paths/export.sh@5 -- # export PATH 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@50 -- # : 0 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:41.936 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # remove_target_ns 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # xtrace_disable 00:17:41.936 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # pci_devs=() 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # net_devs=() 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # e810=() 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # local -ga e810 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # x722=() 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # local -ga x722 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # mlx=() 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # local -ga mlx 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:50.231 12:01:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:50.231 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:50.231 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:50.231 12:01:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:50.231 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:50.231 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:50.231 12:01:14 
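The trace above shows `gather_supported_nvmf_pci_devs` sorting NICs into the `e810`, `x722`, and `mlx` buckets by PCI vendor/device ID (Intel `0x8086`, Mellanox `0x15b3`) before matching the found `0x159b` devices against the e810 list. A minimal sketch of that classification logic, reconstructed from the IDs visible in the trace (the real script builds the lists from a `pci_bus_cache`; the function name here is hypothetical):

```shell
#!/usr/bin/env bash
# Classify a NIC by PCI vendor:device ID, mirroring the buckets seen in
# nvmf/common.sh (IDs taken from the trace; simplified illustration only).
classify_nic() {
  local vendor=$1 device=$2
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 family
    0x8086:0x37d2)               echo x722 ;;    # Intel X722
    0x15b3:*)                    echo mlx ;;     # Mellanox ConnectX family
    *)                           echo unknown ;;
  esac
}

classify_nic 0x8086 0x159b   # the two devices found above -> e810
```

The trace's `Found 0000:4b:00.0 (0x8086 - 0x159b)` lines correspond to this bucket landing in `pci_devs` because the test runs with the e810 driver (`ice`).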
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # is_hw=yes 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@247 -- # create_target_ns 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@248 -- # 
setup_interfaces 1 phy 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@28 -- # local -g _dev 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # ips=() 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@59 -- # [[ phy == veth 
]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772161 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:17:50.231 10.0.0.1 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:17:50.231 12:01:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772162 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:17:50.231 10.0.0.2 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- 
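The `set_ip` calls above derive `10.0.0.1` and `10.0.0.2` from the integer IP pool (`ip_pool=0x0a000001`, i.e. 167772161) via `val_to_ip`, whose `printf '%u.%u.%u.%u\n'` output is visible in the trace. A self-contained reconstruction of that conversion (the octet-extraction arithmetic is inferred from the trace, not copied from setup.sh):

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer to dotted-quad notation, as setup.sh's val_to_ip
# does when assigning addresses from the ip_pool counter.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >>  8) & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1 (initiator)
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2 (target)
```

Incrementing the pool by 2 per interface pair, as `(( _dev++, ip_pool += 2 ))` does above, yields consecutive /24 neighbor addresses for each initiator/target pair.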
# ip link set cvl_0_0 up 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:50.231 
12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 
00:17:50.231 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:50.232 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:50.232 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.695 ms 00:17:50.232 00:17:50.232 --- 10.0.0.1 ping statistics --- 00:17:50.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.232 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target0 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:50.232 12:01:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:17:50.232 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:50.232 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:17:50.232 00:17:50.232 --- 10.0.0.2 ping statistics --- 00:17:50.232 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:50.232 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # return 0 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:17:50.232 12:01:14 
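Condensed from the `setup_interfaces`/`ping_ips` sequence traced above, the interface-pair setup amounts to the following commands (names and addresses — `nvmf_ns_spdk`, `cvl_0_0`/`cvl_0_1`, `10.0.0.1`/`10.0.0.2`, port 4420 — are taken from the trace; requires root, shown for illustration only, not a runnable test):

```shell
# Create the target namespace and move the target-side NIC into it.
ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up
ip link set cvl_0_1 netns nvmf_ns_spdk

# Assign the pair's addresses (also mirrored into ifalias for later lookup).
ip addr add 10.0.0.1/24 dev cvl_0_0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1

# Bring both ends up and open the NVMe/TCP port on the initiator side.
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT

# Verify connectivity in both directions, as ping_ips does.
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
ping -c 1 10.0.0.2
```

Isolating the target NIC in its own network namespace is what lets a single host exercise the NVMe-oF target and initiator over a real physical link (`NET_TYPE=phy`).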
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # return 1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev= 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@160 -- # return 0 00:17:50.232 12:01:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target0 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 
10.0.0.2 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # return 1 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev= 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@160 -- # return 0 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:17:50.232 ' 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:50.232 
12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # nvmfpid=1277532 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # waitforlisten 1277532 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1277532 ']' 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:50.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:50.232 12:01:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.232 [2024-12-05 12:01:14.620615] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:17:50.232 [2024-12-05 12:01:14.620681] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:50.232 [2024-12-05 12:01:14.722740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:50.232 [2024-12-05 12:01:14.776406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:50.232 [2024-12-05 12:01:14.776472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:50.232 [2024-12-05 12:01:14.776481] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:50.232 [2024-12-05 12:01:14.776488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:50.232 [2024-12-05 12:01:14.776495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
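The `waitforlisten` step above blocks until the freshly spawned `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock`. A hypothetical, simplified sketch of that polling pattern (function and parameter names here are illustrative; the real helper lives in SPDK's `autotest_common.sh` and does more, e.g. probing the socket via an RPC):

```shell
# Hypothetical, simplified sketch of the waitforlisten pattern: poll until a
# path (e.g. /var/tmp/spdk.sock) appears, giving up after a bounded number of
# retries. Names and retry counts are illustrative, not SPDK's actual code.
waitfor() {
  local path=$1 max_retries=${2:-100} i=0
  while [ "$i" -lt "$max_retries" ]; do
    if [ -e "$path" ]; then
      return 0            # target showed up
    fi
    i=$((i + 1))
    sleep 0.1             # short back-off between polls
  done
  return 1                # timed out waiting
}
```

The bounded retry count is what lets the suite fail fast with a diagnostic instead of hanging forever when the target never comes up.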
00:17:50.232 [2024-12-05 12:01:14.778887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:50.232 [2024-12-05 12:01:14.779044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:50.232 [2024-12-05 12:01:14.779204] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:50.232 [2024-12-05 12:01:14.779205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:50.492 "tick_rate": 2400000000, 00:17:50.492 "poll_groups": [ 00:17:50.492 { 00:17:50.492 "name": "nvmf_tgt_poll_group_000", 00:17:50.492 "admin_qpairs": 0, 00:17:50.492 "io_qpairs": 0, 00:17:50.492 "current_admin_qpairs": 0, 00:17:50.492 "current_io_qpairs": 0, 00:17:50.492 "pending_bdev_io": 0, 00:17:50.492 "completed_nvme_io": 0, 
00:17:50.492 "transports": [] 00:17:50.492 }, 00:17:50.492 { 00:17:50.492 "name": "nvmf_tgt_poll_group_001", 00:17:50.492 "admin_qpairs": 0, 00:17:50.492 "io_qpairs": 0, 00:17:50.492 "current_admin_qpairs": 0, 00:17:50.492 "current_io_qpairs": 0, 00:17:50.492 "pending_bdev_io": 0, 00:17:50.492 "completed_nvme_io": 0, 00:17:50.492 "transports": [] 00:17:50.492 }, 00:17:50.492 { 00:17:50.492 "name": "nvmf_tgt_poll_group_002", 00:17:50.492 "admin_qpairs": 0, 00:17:50.492 "io_qpairs": 0, 00:17:50.492 "current_admin_qpairs": 0, 00:17:50.492 "current_io_qpairs": 0, 00:17:50.492 "pending_bdev_io": 0, 00:17:50.492 "completed_nvme_io": 0, 00:17:50.492 "transports": [] 00:17:50.492 }, 00:17:50.492 { 00:17:50.492 "name": "nvmf_tgt_poll_group_003", 00:17:50.492 "admin_qpairs": 0, 00:17:50.492 "io_qpairs": 0, 00:17:50.492 "current_admin_qpairs": 0, 00:17:50.492 "current_io_qpairs": 0, 00:17:50.492 "pending_bdev_io": 0, 00:17:50.492 "completed_nvme_io": 0, 00:17:50.492 "transports": [] 00:17:50.492 } 00:17:50.492 ] 00:17:50.492 }' 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:50.492 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.752 [2024-12-05 12:01:15.617917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:50.752 "tick_rate": 2400000000, 00:17:50.752 "poll_groups": [ 00:17:50.752 { 00:17:50.752 "name": "nvmf_tgt_poll_group_000", 00:17:50.752 "admin_qpairs": 0, 00:17:50.752 "io_qpairs": 0, 00:17:50.752 "current_admin_qpairs": 0, 00:17:50.752 "current_io_qpairs": 0, 00:17:50.752 "pending_bdev_io": 0, 00:17:50.752 "completed_nvme_io": 0, 00:17:50.752 "transports": [ 00:17:50.752 { 00:17:50.752 "trtype": "TCP" 00:17:50.752 } 00:17:50.752 ] 00:17:50.752 }, 00:17:50.752 { 00:17:50.752 "name": "nvmf_tgt_poll_group_001", 00:17:50.752 "admin_qpairs": 0, 00:17:50.752 "io_qpairs": 0, 00:17:50.752 "current_admin_qpairs": 0, 00:17:50.752 "current_io_qpairs": 0, 00:17:50.752 "pending_bdev_io": 0, 00:17:50.752 "completed_nvme_io": 0, 00:17:50.752 "transports": [ 00:17:50.752 { 00:17:50.752 "trtype": "TCP" 00:17:50.752 } 00:17:50.752 ] 00:17:50.752 }, 00:17:50.752 { 00:17:50.752 "name": "nvmf_tgt_poll_group_002", 00:17:50.752 "admin_qpairs": 0, 00:17:50.752 "io_qpairs": 0, 00:17:50.752 "current_admin_qpairs": 0, 00:17:50.752 "current_io_qpairs": 0, 00:17:50.752 "pending_bdev_io": 0, 00:17:50.752 "completed_nvme_io": 0, 00:17:50.752 
"transports": [ 00:17:50.752 { 00:17:50.752 "trtype": "TCP" 00:17:50.752 } 00:17:50.752 ] 00:17:50.752 }, 00:17:50.752 { 00:17:50.752 "name": "nvmf_tgt_poll_group_003", 00:17:50.752 "admin_qpairs": 0, 00:17:50.752 "io_qpairs": 0, 00:17:50.752 "current_admin_qpairs": 0, 00:17:50.752 "current_io_qpairs": 0, 00:17:50.752 "pending_bdev_io": 0, 00:17:50.752 "completed_nvme_io": 0, 00:17:50.752 "transports": [ 00:17:50.752 { 00:17:50.752 "trtype": "TCP" 00:17:50.752 } 00:17:50.752 ] 00:17:50.752 } 00:17:50.752 ] 00:17:50.752 }' 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:50.752 12:01:15 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:50.752 Malloc1 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.752 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.012 [2024-12-05 12:01:15.834602] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:51.012 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:17:51.013 [2024-12-05 12:01:15.871551] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:17:51.013 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:51.013 could not add new controller: failed to write to nvme-fabrics device 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
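The `NOT ... valid_exec_arg` trace above shows the harness vetting `nvme` with `type -t` / `type -P` before running it, so that an expected failure comes from the command itself rather than from a missing binary. A hypothetical, simplified sketch of that guard (the real `valid_exec_arg` in `autotest_common.sh` additionally resolves a full path via `type -P`, which this sketch omits):

```shell
# Hypothetical, simplified version of the valid_exec_arg pattern traced above:
# before executing, confirm via `type -t` that the first argument resolves to
# something runnable (alias, keyword, function, builtin, or file on PATH).
run_if_valid() {
  local arg=$1
  case "$(type -t "$arg")" in
    alias|keyword|function|builtin|file)
      "$@"                         # safe to execute as-is
      ;;
    *)
      echo "not executable: $arg" >&2
      return 127                   # conventional "command not found" status
      ;;
  esac
}
```

This distinction matters in the log: the subsequent `nvme connect` is meant to fail with an I/O error from the target's host allow-list check, and the guard ensures that failure is not confused with `nvme` simply being absent.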
00:17:51.013 12:01:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:52.396 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:52.396 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:52.396 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:52.396 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:52.396 12:01:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:54.940 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- 
# local i=0 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:54.940 12:01:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:54.940 [2024-12-05 12:01:19.608767] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:17:54.940 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:54.940 could not add new controller: failed to write to nvme-fabrics device 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:54.940 12:01:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:56.326 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:56.326 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:56.326 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:56.326 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:56.326 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:58.239 12:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:58.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.239 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.500 12:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.500 [2024-12-05 12:01:23.311461] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.500 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:59.887 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:59.887 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:59.887 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.887 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:59.887 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:02.430 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:02.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.431 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.431 [2024-12-05 12:01:27.034382] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.431 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:03.817 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:03.817 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 
00:18:03.817 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.817 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:03.817 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:05.732 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:05.732 [2024-12-05 12:01:30.714494] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.732 12:01:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:07.647 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:07.647 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:07.647 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:07.647 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:07.647 
12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:09.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.560 [2024-12-05 12:01:34.421746] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.560 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:10.942 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:10.942 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:10.942 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:10.942 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:10.942 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:12.855 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:12.855 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:12.855 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:13.115 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:13.115 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.115 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:13.115 12:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:13.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:13.115 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:13.115 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:13.115 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:13.115 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:13.115 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:13.115 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:13.115 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:13.115 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:13.115 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.115 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.115 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.115 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:13.115 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.115 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.116 [2024-12-05 12:01:38.083594] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.116 12:01:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:15.029 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:18:15.029 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:18:15.029 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:15.029 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:15.029 12:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:16.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:16.942 12:01:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.942 [2024-12-05 12:01:41.813591] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.942 
12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.942 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.943 
12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.943 [2024-12-05 12:01:41.877719] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.943 [2024-12-05 12:01:41.945873] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.943 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.206 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.206 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:17.206 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:18:17.206 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.206 12:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.206 [2024-12-05 12:01:42.018107] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.206 [2024-12-05 12:01:42.086341] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:17.206 12:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.206 12:01:42 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:18:17.206 "tick_rate": 2400000000, 00:18:17.206 "poll_groups": [ 00:18:17.206 { 00:18:17.206 "name": "nvmf_tgt_poll_group_000", 00:18:17.206 "admin_qpairs": 0, 00:18:17.206 "io_qpairs": 224, 00:18:17.206 "current_admin_qpairs": 0, 00:18:17.206 "current_io_qpairs": 0, 00:18:17.206 "pending_bdev_io": 0, 00:18:17.206 "completed_nvme_io": 274, 00:18:17.206 "transports": [ 00:18:17.206 { 00:18:17.206 "trtype": "TCP" 00:18:17.206 } 00:18:17.206 ] 00:18:17.206 }, 00:18:17.206 { 00:18:17.206 "name": "nvmf_tgt_poll_group_001", 00:18:17.206 "admin_qpairs": 1, 00:18:17.206 "io_qpairs": 223, 00:18:17.206 "current_admin_qpairs": 0, 00:18:17.206 "current_io_qpairs": 0, 00:18:17.206 "pending_bdev_io": 0, 00:18:17.206 "completed_nvme_io": 451, 00:18:17.206 "transports": [ 00:18:17.206 { 00:18:17.206 "trtype": "TCP" 00:18:17.206 } 00:18:17.206 ] 00:18:17.206 }, 00:18:17.206 { 00:18:17.206 "name": "nvmf_tgt_poll_group_002", 00:18:17.206 "admin_qpairs": 6, 00:18:17.206 "io_qpairs": 218, 00:18:17.206 "current_admin_qpairs": 0, 00:18:17.206 "current_io_qpairs": 0, 00:18:17.206 "pending_bdev_io": 0, 00:18:17.206 "completed_nvme_io": 222, 00:18:17.206 "transports": [ 00:18:17.206 { 00:18:17.206 "trtype": "TCP" 00:18:17.206 } 00:18:17.206 ] 00:18:17.206 }, 00:18:17.206 { 00:18:17.206 "name": "nvmf_tgt_poll_group_003", 00:18:17.206 "admin_qpairs": 0, 00:18:17.206 "io_qpairs": 224, 00:18:17.206 "current_admin_qpairs": 0, 00:18:17.206 "current_io_qpairs": 0, 00:18:17.206 "pending_bdev_io": 0, 
00:18:17.206 "completed_nvme_io": 292, 00:18:17.206 "transports": [ 00:18:17.206 { 00:18:17.206 "trtype": "TCP" 00:18:17.206 } 00:18:17.206 ] 00:18:17.206 } 00:18:17.206 ] 00:18:17.206 }' 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:18:17.206 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # nvmfcleanup 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@99 -- # sync 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # 
set +e 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:18:17.467 rmmod nvme_tcp 00:18:17.467 rmmod nvme_fabrics 00:18:17.467 rmmod nvme_keyring 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # set -e 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # return 0 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # '[' -n 1277532 ']' 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@337 -- # killprocess 1277532 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1277532 ']' 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1277532 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1277532 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1277532' 00:18:17.467 killing process with pid 1277532 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1277532 00:18:17.467 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@978 -- # wait 1277532 00:18:17.727 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:18:17.727 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # nvmf_fini 00:18:17.727 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@254 -- # local dev 00:18:17.727 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@257 -- # remove_target_ns 00:18:17.727 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:17.727 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:17.727 12:01:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@121 -- # return 0 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 
00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # _dev=0 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # dev_map=() 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@274 -- # iptr 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # iptables-save 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # iptables-restore 00:18:19.640 00:18:19.640 real 0m38.019s 00:18:19.640 user 1m53.148s 00:18:19.640 sys 0m7.678s 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:18:19.640 ************************************ 00:18:19.640 END TEST nvmf_rpc 00:18:19.640 ************************************ 
00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:19.640 12:01:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:19.901 ************************************ 00:18:19.901 START TEST nvmf_invalid 00:18:19.901 ************************************ 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:18:19.901 * Looking for test storage... 00:18:19.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:18:19.901 
12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:19.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.901 --rc genhtml_branch_coverage=1 00:18:19.901 --rc genhtml_function_coverage=1 00:18:19.901 --rc genhtml_legend=1 00:18:19.901 --rc geninfo_all_blocks=1 00:18:19.901 --rc geninfo_unexecuted_blocks=1 00:18:19.901 00:18:19.901 ' 
00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:19.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.901 --rc genhtml_branch_coverage=1 00:18:19.901 --rc genhtml_function_coverage=1 00:18:19.901 --rc genhtml_legend=1 00:18:19.901 --rc geninfo_all_blocks=1 00:18:19.901 --rc geninfo_unexecuted_blocks=1 00:18:19.901 00:18:19.901 ' 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:19.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.901 --rc genhtml_branch_coverage=1 00:18:19.901 --rc genhtml_function_coverage=1 00:18:19.901 --rc genhtml_legend=1 00:18:19.901 --rc geninfo_all_blocks=1 00:18:19.901 --rc geninfo_unexecuted_blocks=1 00:18:19.901 00:18:19.901 ' 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:19.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:19.901 --rc genhtml_branch_coverage=1 00:18:19.901 --rc genhtml_function_coverage=1 00:18:19.901 --rc genhtml_legend=1 00:18:19.901 --rc geninfo_all_blocks=1 00:18:19.901 --rc geninfo_unexecuted_blocks=1 00:18:19.901 00:18:19.901 ' 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:19.901 12:01:44 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.901 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- paths/export.sh@5 -- # export PATH 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@50 -- # : 0 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 
00:18:19.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # prepare_net_devs 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # remove_target_ns 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # 
eval '_remove_target_ns 15> /dev/null' 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # xtrace_disable 00:18:19.902 12:01:44 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # pci_devs=() 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # local -a pci_devs 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # pci_drivers=() 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # net_devs=() 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # local -ga net_devs 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # e810=() 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # local -ga e810 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # x722=() 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # local -ga x722 00:18:28.064 12:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # mlx=() 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # local -ga mlx 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.064 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 
00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:28.065 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:28.065 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:28.065 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:28.065 
12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:28.065 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # is_hw=yes 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@247 -- # create_target_ns 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:18:28.065 12:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@28 -- # local -g _dev 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # ips=() 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:18:28.065 12:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772161 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:18:28.065 10.0.0.1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772162 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval 'ip netns 
exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:18:28.065 10.0.0.2 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:18:28.065 12:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@38 -- # ping_ips 1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:28.065 PING 10.0.0.1 (10.0.0.1) 
56(84) bytes of data. 00:18:28.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.692 ms 00:18:28.065 00:18:28.065 --- 10.0.0.1 ping statistics --- 00:18:28.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.065 rtt min/avg/max/mdev = 0.692/0.692/0.692/0.000 ms 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target0 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:28.065 12:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:18:28.065 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:18:28.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:18:28.065 00:18:28.066 --- 10.0.0.2 ping statistics --- 00:18:28.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.066 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # return 0 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:18:28.066 12:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:18:28.066 12:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # return 1 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev= 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@160 -- # return 0 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target0 00:18:28.066 
12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target1 
00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target1 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # return 1 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev= 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@160 -- # return 0 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:18:28.066 ' 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.066 12:01:52 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # nvmfpid=1287284 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # waitforlisten 1287284 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1287284 ']' 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.066 12:01:52 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:28.066 [2024-12-05 12:01:52.631236] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:18:28.066 [2024-12-05 12:01:52.631303] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.066 [2024-12-05 12:01:52.732149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:28.066 [2024-12-05 12:01:52.785209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.066 [2024-12-05 12:01:52.785266] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.066 [2024-12-05 12:01:52.785275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.066 [2024-12-05 12:01:52.785282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.066 [2024-12-05 12:01:52.785289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:28.066 [2024-12-05 12:01:52.787741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.066 [2024-12-05 12:01:52.787906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.066 [2024-12-05 12:01:52.788072] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:28.066 [2024-12-05 12:01:52.788073] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.637 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.637 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:18:28.637 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:28.637 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.637 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:28.637 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.637 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:28.637 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode32131 00:18:28.637 [2024-12-05 12:01:53.670463] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:18:28.897 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:18:28.897 { 00:18:28.897 "nqn": "nqn.2016-06.io.spdk:cnode32131", 00:18:28.897 "tgt_name": "foobar", 00:18:28.897 "method": "nvmf_create_subsystem", 00:18:28.897 "req_id": 1 00:18:28.897 } 00:18:28.897 Got JSON-RPC error 
response 00:18:28.897 response: 00:18:28.897 { 00:18:28.897 "code": -32603, 00:18:28.897 "message": "Unable to find target foobar" 00:18:28.897 }' 00:18:28.897 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:18:28.897 { 00:18:28.897 "nqn": "nqn.2016-06.io.spdk:cnode32131", 00:18:28.897 "tgt_name": "foobar", 00:18:28.897 "method": "nvmf_create_subsystem", 00:18:28.897 "req_id": 1 00:18:28.897 } 00:18:28.897 Got JSON-RPC error response 00:18:28.897 response: 00:18:28.897 { 00:18:28.897 "code": -32603, 00:18:28.897 "message": "Unable to find target foobar" 00:18:28.897 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:18:28.897 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:18:28.897 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode31519 00:18:28.897 [2024-12-05 12:01:53.879353] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31519: invalid serial number 'SPDKISFASTANDAWESOME' 00:18:28.897 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:18:28.897 { 00:18:28.897 "nqn": "nqn.2016-06.io.spdk:cnode31519", 00:18:28.897 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:28.897 "method": "nvmf_create_subsystem", 00:18:28.897 "req_id": 1 00:18:28.897 } 00:18:28.897 Got JSON-RPC error response 00:18:28.897 response: 00:18:28.897 { 00:18:28.897 "code": -32602, 00:18:28.898 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:28.898 }' 00:18:28.898 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:18:28.898 { 00:18:28.898 "nqn": "nqn.2016-06.io.spdk:cnode31519", 00:18:28.898 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:18:28.898 "method": "nvmf_create_subsystem", 
00:18:28.898 "req_id": 1 00:18:28.898 } 00:18:28.898 Got JSON-RPC error response 00:18:28.898 response: 00:18:28.898 { 00:18:28.898 "code": -32602, 00:18:28.898 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:18:28.898 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:28.898 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:18:28.898 12:01:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode23410 00:18:29.159 [2024-12-05 12:01:54.084060] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23410: invalid model number 'SPDK_Controller' 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:18:29.159 { 00:18:29.159 "nqn": "nqn.2016-06.io.spdk:cnode23410", 00:18:29.159 "model_number": "SPDK_Controller\u001f", 00:18:29.159 "method": "nvmf_create_subsystem", 00:18:29.159 "req_id": 1 00:18:29.159 } 00:18:29.159 Got JSON-RPC error response 00:18:29.159 response: 00:18:29.159 { 00:18:29.159 "code": -32602, 00:18:29.159 "message": "Invalid MN SPDK_Controller\u001f" 00:18:29.159 }' 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:18:29.159 { 00:18:29.159 "nqn": "nqn.2016-06.io.spdk:cnode23410", 00:18:29.159 "model_number": "SPDK_Controller\u001f", 00:18:29.159 "method": "nvmf_create_subsystem", 00:18:29.159 "req_id": 1 00:18:29.159 } 00:18:29.159 Got JSON-RPC error response 00:18:29.159 response: 00:18:29.159 { 00:18:29.159 "code": -32602, 00:18:29.159 "message": "Invalid MN SPDK_Controller\u001f" 00:18:29.159 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.159 12:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:18:29.159 12:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.159 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:18:29.160 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:18:29.421 12:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:18:29.421 12:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.421 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.422 12:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.422 12:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ i == \- ]] 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'izAn0SE|ca'\''y_v#A7=;{U' 00:18:29.422 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'izAn0SE|ca'\''y_v#A7=;{U' nqn.2016-06.io.spdk:cnode20010 00:18:29.422 [2024-12-05 12:01:54.465440] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20010: invalid serial number 'izAn0SE|ca'y_v#A7=;{U' 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:18:29.684 { 00:18:29.684 "nqn": "nqn.2016-06.io.spdk:cnode20010", 00:18:29.684 "serial_number": "izAn0SE|ca'\''y_v#A7=;{U", 00:18:29.684 "method": "nvmf_create_subsystem", 00:18:29.684 "req_id": 1 00:18:29.684 } 00:18:29.684 Got JSON-RPC error response 00:18:29.684 response: 00:18:29.684 { 00:18:29.684 "code": -32602, 00:18:29.684 "message": "Invalid SN izAn0SE|ca'\''y_v#A7=;{U" 00:18:29.684 }' 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:18:29.684 { 00:18:29.684 "nqn": "nqn.2016-06.io.spdk:cnode20010", 00:18:29.684 "serial_number": "izAn0SE|ca'y_v#A7=;{U", 00:18:29.684 "method": "nvmf_create_subsystem", 00:18:29.684 "req_id": 1 00:18:29.684 } 00:18:29.684 Got JSON-RPC error response 00:18:29.684 response: 00:18:29.684 { 00:18:29.684 "code": -32602, 00:18:29.684 "message": "Invalid SN izAn0SE|ca'y_v#A7=;{U" 00:18:29.684 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:18:29.684 
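[editor's note] The `gen_random_s` trace above builds a test string one character at a time from an ASCII-code array ('32' through '127'), then guards against a leading '-' (`[[ i == \- ]]`) before handing the string to `nvmf_create_subsystem` as an invalid serial/model number. A minimal Python sketch of equivalent behavior follows; the function name mirrors the script, but the re-draw strategy for a leading '-' is an assumption, not SPDK's exact handling:

```python
import random

def gen_random_s(length: int) -> str:
    """Generate a random string like the traced invalid.sh helper."""
    # Pool mirrors the traced chars=('32' ... '127') array:
    # printable ASCII plus DEL (0x7f), codes 32..127 inclusive.
    pool = [chr(c) for c in range(32, 128)]
    while True:
        s = "".join(random.choice(pool) for _ in range(length))
        # The traced script checks whether the first char is '-',
        # presumably so the string is not parsed as a CLI option flag
        # by rpc.py; here we simply re-draw in that case (an assumption).
        if not s.startswith("-"):
            return s

serial = gen_random_s(21)
print(len(serial))  # 21-character candidate serial number
```

Such strings routinely contain characters that are invalid in an NVMe serial or model number, which is what drives the "Invalid SN"/"Invalid MN" JSON-RPC error paths exercised in this test.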
12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.684 12:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:18:29.684 12:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:18:29.684 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:18:29.684 12:01:54 
[trace condensed, 00:18:29.684-00:18:29.946: the target/invalid.sh@24/@25 loop appends one character per iteration via a printf %x / echo -e / string+= triple, adding 1Ib=X'GTvAtsdBwsyO'rB}DH2.%ex to the random model-number string; the trailing '!', 'D', and 'n' iterations follow below]
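The character-append trace above follows one fixed pattern per iteration. A self-contained sketch of that generator, reconstructed from the @24/@25 markers (the charset, the 41-character length, and the variable names are assumptions here, not the actual target/invalid.sh source):

```shell
# Build a random model-number string one character at a time, the way
# the trace shows: printf %x gives the hex code, echo -e turns it back
# into a character, string+= appends it.
string=''
length=41
for (( ll = 0; ll < length; ll++ )); do
    # pick a random printable, non-space ASCII code point (33-126)
    code=$(( RANDOM % 94 + 33 ))
    string+=$(echo -e "\\x$(printf %x "$code")")
done
printf '%s\n' "$string"
```

Feeding such a string to `nvmf_create_subsystem` as a model number is what provokes the "Invalid MN" error checked a few entries below.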
00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ i == \- ]] 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'iOnODoXx21Ib=X'\''GTvAtsdBwsyO'\''rB}DH2.%ex!Dn' 00:18:29.946 12:01:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'iOnODoXx21Ib=X'\''GTvAtsdBwsyO'\''rB}DH2.%ex!Dn' nqn.2016-06.io.spdk:cnode19812 00:18:30.207 [2024-12-05 12:01:55.015472] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem 
nqn.2016-06.io.spdk:cnode19812: invalid model number 'iOnODoXx21Ib=X'GTvAtsdBwsyO'rB}DH2.%ex!Dn' 00:18:30.207 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:18:30.207 { 00:18:30.207 "nqn": "nqn.2016-06.io.spdk:cnode19812", 00:18:30.207 "model_number": "iOnODoXx21Ib=X'\''GTvAtsdBwsyO'\''rB}DH2.%ex!Dn", 00:18:30.207 "method": "nvmf_create_subsystem", 00:18:30.207 "req_id": 1 00:18:30.207 } 00:18:30.207 Got JSON-RPC error response 00:18:30.207 response: 00:18:30.207 { 00:18:30.207 "code": -32602, 00:18:30.207 "message": "Invalid MN iOnODoXx21Ib=X'\''GTvAtsdBwsyO'\''rB}DH2.%ex!Dn" 00:18:30.207 }' 00:18:30.207 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:18:30.207 { 00:18:30.207 "nqn": "nqn.2016-06.io.spdk:cnode19812", 00:18:30.207 "model_number": "iOnODoXx21Ib=X'GTvAtsdBwsyO'rB}DH2.%ex!Dn", 00:18:30.207 "method": "nvmf_create_subsystem", 00:18:30.207 "req_id": 1 00:18:30.207 } 00:18:30.207 Got JSON-RPC error response 00:18:30.207 response: 00:18:30.207 { 00:18:30.207 "code": -32602, 00:18:30.207 "message": "Invalid MN iOnODoXx21Ib=X'GTvAtsdBwsyO'rB}DH2.%ex!Dn" 00:18:30.207 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:18:30.207 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:18:30.207 [2024-12-05 12:01:55.208191] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:30.207 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:18:30.480 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:18:30.480 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '10.0.0.2 00:18:30.480 ' 00:18:30.480 12:01:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:18:30.480 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP=10.0.0.2 00:18:30.480 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a 10.0.0.2 -s 4421 00:18:30.740 [2024-12-05 12:01:55.593372] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:18:30.740 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:18:30.740 { 00:18:30.740 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:30.740 "listen_address": { 00:18:30.740 "trtype": "tcp", 00:18:30.740 "traddr": "10.0.0.2", 00:18:30.740 "trsvcid": "4421" 00:18:30.740 }, 00:18:30.740 "method": "nvmf_subsystem_remove_listener", 00:18:30.740 "req_id": 1 00:18:30.740 } 00:18:30.740 Got JSON-RPC error response 00:18:30.740 response: 00:18:30.740 { 00:18:30.740 "code": -32602, 00:18:30.740 "message": "Invalid parameters" 00:18:30.740 }' 00:18:30.740 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:18:30.740 { 00:18:30.740 "nqn": "nqn.2016-06.io.spdk:cnode", 00:18:30.740 "listen_address": { 00:18:30.740 "trtype": "tcp", 00:18:30.740 "traddr": "10.0.0.2", 00:18:30.740 "trsvcid": "4421" 00:18:30.740 }, 00:18:30.740 "method": "nvmf_subsystem_remove_listener", 00:18:30.740 "req_id": 1 00:18:30.740 } 00:18:30.740 Got JSON-RPC error response 00:18:30.740 response: 00:18:30.740 { 00:18:30.740 "code": -32602, 00:18:30.740 "message": "Invalid parameters" 00:18:30.740 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:18:30.740 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29897 -i 0 00:18:30.740 [2024-12-05 
12:01:55.781917] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29897: invalid cntlid range [0-65519] 00:18:31.002 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:18:31.002 { 00:18:31.002 "nqn": "nqn.2016-06.io.spdk:cnode29897", 00:18:31.002 "min_cntlid": 0, 00:18:31.002 "method": "nvmf_create_subsystem", 00:18:31.002 "req_id": 1 00:18:31.002 } 00:18:31.002 Got JSON-RPC error response 00:18:31.002 response: 00:18:31.002 { 00:18:31.002 "code": -32602, 00:18:31.002 "message": "Invalid cntlid range [0-65519]" 00:18:31.002 }' 00:18:31.002 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:18:31.002 { 00:18:31.002 "nqn": "nqn.2016-06.io.spdk:cnode29897", 00:18:31.002 "min_cntlid": 0, 00:18:31.002 "method": "nvmf_create_subsystem", 00:18:31.002 "req_id": 1 00:18:31.002 } 00:18:31.002 Got JSON-RPC error response 00:18:31.002 response: 00:18:31.002 { 00:18:31.002 "code": -32602, 00:18:31.002 "message": "Invalid cntlid range [0-65519]" 00:18:31.002 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:31.002 12:01:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23585 -i 65520 00:18:31.002 [2024-12-05 12:01:55.970545] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23585: invalid cntlid range [65520-65519] 00:18:31.002 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:18:31.002 { 00:18:31.002 "nqn": "nqn.2016-06.io.spdk:cnode23585", 00:18:31.002 "min_cntlid": 65520, 00:18:31.002 "method": "nvmf_create_subsystem", 00:18:31.002 "req_id": 1 00:18:31.002 } 00:18:31.002 Got JSON-RPC error response 00:18:31.002 response: 00:18:31.002 { 00:18:31.002 "code": -32602, 00:18:31.002 "message": "Invalid cntlid range [65520-65519]" 
00:18:31.002 }' 00:18:31.002 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:18:31.002 { 00:18:31.002 "nqn": "nqn.2016-06.io.spdk:cnode23585", 00:18:31.002 "min_cntlid": 65520, 00:18:31.002 "method": "nvmf_create_subsystem", 00:18:31.002 "req_id": 1 00:18:31.002 } 00:18:31.002 Got JSON-RPC error response 00:18:31.002 response: 00:18:31.002 { 00:18:31.002 "code": -32602, 00:18:31.002 "message": "Invalid cntlid range [65520-65519]" 00:18:31.002 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:31.002 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode32277 -I 0 00:18:31.264 [2024-12-05 12:01:56.159170] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32277: invalid cntlid range [1-0] 00:18:31.264 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:18:31.264 { 00:18:31.264 "nqn": "nqn.2016-06.io.spdk:cnode32277", 00:18:31.264 "max_cntlid": 0, 00:18:31.264 "method": "nvmf_create_subsystem", 00:18:31.264 "req_id": 1 00:18:31.264 } 00:18:31.264 Got JSON-RPC error response 00:18:31.264 response: 00:18:31.264 { 00:18:31.264 "code": -32602, 00:18:31.264 "message": "Invalid cntlid range [1-0]" 00:18:31.264 }' 00:18:31.264 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:18:31.264 { 00:18:31.264 "nqn": "nqn.2016-06.io.spdk:cnode32277", 00:18:31.264 "max_cntlid": 0, 00:18:31.264 "method": "nvmf_create_subsystem", 00:18:31.264 "req_id": 1 00:18:31.264 } 00:18:31.264 Got JSON-RPC error response 00:18:31.264 response: 00:18:31.264 { 00:18:31.264 "code": -32602, 00:18:31.264 "message": "Invalid cntlid range [1-0]" 00:18:31.264 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:31.264 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15030 -I 65520 00:18:31.524 [2024-12-05 12:01:56.347762] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15030: invalid cntlid range [1-65520] 00:18:31.524 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:18:31.524 { 00:18:31.524 "nqn": "nqn.2016-06.io.spdk:cnode15030", 00:18:31.524 "max_cntlid": 65520, 00:18:31.524 "method": "nvmf_create_subsystem", 00:18:31.524 "req_id": 1 00:18:31.524 } 00:18:31.524 Got JSON-RPC error response 00:18:31.524 response: 00:18:31.524 { 00:18:31.524 "code": -32602, 00:18:31.524 "message": "Invalid cntlid range [1-65520]" 00:18:31.524 }' 00:18:31.524 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:18:31.524 { 00:18:31.524 "nqn": "nqn.2016-06.io.spdk:cnode15030", 00:18:31.524 "max_cntlid": 65520, 00:18:31.524 "method": "nvmf_create_subsystem", 00:18:31.524 "req_id": 1 00:18:31.524 } 00:18:31.524 Got JSON-RPC error response 00:18:31.524 response: 00:18:31.524 { 00:18:31.524 "code": -32602, 00:18:31.524 "message": "Invalid cntlid range [1-65520]" 00:18:31.524 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:31.524 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25748 -i 6 -I 5 00:18:31.524 [2024-12-05 12:01:56.536381] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25748: invalid cntlid range [6-5] 00:18:31.524 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:18:31.524 { 00:18:31.524 "nqn": "nqn.2016-06.io.spdk:cnode25748", 00:18:31.524 "min_cntlid": 6, 00:18:31.524 "max_cntlid": 5, 00:18:31.524 "method": "nvmf_create_subsystem", 00:18:31.524 "req_id": 1 00:18:31.524 } 
00:18:31.524 Got JSON-RPC error response 00:18:31.524 response: 00:18:31.524 { 00:18:31.524 "code": -32602, 00:18:31.524 "message": "Invalid cntlid range [6-5]" 00:18:31.524 }' 00:18:31.524 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:18:31.524 { 00:18:31.524 "nqn": "nqn.2016-06.io.spdk:cnode25748", 00:18:31.524 "min_cntlid": 6, 00:18:31.525 "max_cntlid": 5, 00:18:31.525 "method": "nvmf_create_subsystem", 00:18:31.525 "req_id": 1 00:18:31.525 } 00:18:31.525 Got JSON-RPC error response 00:18:31.525 response: 00:18:31.525 { 00:18:31.525 "code": -32602, 00:18:31.525 "message": "Invalid cntlid range [6-5]" 00:18:31.525 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:18:31.525 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:18:31.786 { 00:18:31.786 "name": "foobar", 00:18:31.786 "method": "nvmf_delete_target", 00:18:31.786 "req_id": 1 00:18:31.786 } 00:18:31.786 Got JSON-RPC error response 00:18:31.786 response: 00:18:31.786 { 00:18:31.786 "code": -32602, 00:18:31.786 "message": "The specified target doesn'\''t exist, cannot delete it." 00:18:31.786 }' 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:18:31.786 { 00:18:31.786 "name": "foobar", 00:18:31.786 "method": "nvmf_delete_target", 00:18:31.786 "req_id": 1 00:18:31.786 } 00:18:31.786 Got JSON-RPC error response 00:18:31.786 response: 00:18:31.786 { 00:18:31.786 "code": -32602, 00:18:31.786 "message": "The specified target doesn't exist, cannot delete it." 
00:18:31.786 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # nvmfcleanup 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@99 -- # sync 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # set +e 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:18:31.786 rmmod nvme_tcp 00:18:31.786 rmmod nvme_fabrics 00:18:31.786 rmmod nvme_keyring 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # set -e 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # return 0 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # '[' -n 1287284 ']' 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@337 -- # killprocess 1287284 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 1287284 ']' 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 1287284 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1287284 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1287284' 00:18:31.786 killing process with pid 1287284 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 1287284 00:18:31.786 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 1287284 00:18:32.047 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:18:32.047 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # nvmf_fini 00:18:32.047 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@254 -- # local dev 00:18:32.047 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@257 -- # remove_target_ns 00:18:32.047 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:32.047 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:32.047 12:01:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@258 -- # delete_main_bridge 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@121 -- # return 0 00:18:33.959 12:01:58 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:18:33.959 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:18:33.960 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:18:33.960 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:18:33.960 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:18:33.960 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # _dev=0 00:18:33.960 
12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # dev_map=() 00:18:33.960 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@274 -- # iptr 00:18:33.960 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # iptables-save 00:18:33.960 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:18:33.960 12:01:58 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # iptables-restore 00:18:33.960 00:18:33.960 real 0m14.300s 00:18:33.960 user 0m21.215s 00:18:33.960 sys 0m6.811s 00:18:33.960 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:33.960 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:18:33.960 ************************************ 00:18:33.960 END TEST nvmf_invalid 00:18:33.960 ************************************ 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:34.221 ************************************ 00:18:34.221 START TEST nvmf_connect_stress 00:18:34.221 ************************************ 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:18:34.221 * Looking for test storage... 
00:18:34.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.221 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:18:34.509 12:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.509 12:01:59 
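The xtrace above shows `scripts/common.sh`'s `cmp_versions` splitting `1.15` and `2` into arrays (`read -ra ver1` / `read -ra ver2`) and comparing them component by component via `decimal`. A minimal standalone sketch of that component-wise comparison (hypothetical helper name `ver_lt`, not the SPDK script itself):

```shell
# Sketch of a dot-separated version comparison, as traced above.
# ver_lt returns 0 (true) when $1 < $2; missing components count as 0.
ver_lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < max; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x > y )) && return 1
    (( x < y )) && return 0
  done
  return 1   # equal is not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"   # → 1.15 < 2
```

This mirrors why the trace takes the `ver1[v] < ver2[v]` branch and returns 0: the first components 1 and 2 already decide the comparison.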
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:34.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.509 --rc genhtml_branch_coverage=1 00:18:34.509 --rc genhtml_function_coverage=1 00:18:34.509 --rc genhtml_legend=1 00:18:34.509 --rc geninfo_all_blocks=1 00:18:34.509 --rc geninfo_unexecuted_blocks=1 00:18:34.509 00:18:34.509 ' 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:34.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.509 --rc genhtml_branch_coverage=1 00:18:34.509 --rc genhtml_function_coverage=1 00:18:34.509 --rc genhtml_legend=1 00:18:34.509 --rc geninfo_all_blocks=1 00:18:34.509 --rc geninfo_unexecuted_blocks=1 00:18:34.509 00:18:34.509 ' 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:34.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.509 --rc genhtml_branch_coverage=1 00:18:34.509 --rc genhtml_function_coverage=1 00:18:34.509 --rc genhtml_legend=1 00:18:34.509 --rc geninfo_all_blocks=1 00:18:34.509 --rc geninfo_unexecuted_blocks=1 00:18:34.509 00:18:34.509 ' 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:34.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.509 --rc genhtml_branch_coverage=1 00:18:34.509 --rc genhtml_function_coverage=1 00:18:34.509 --rc genhtml_legend=1 00:18:34.509 --rc geninfo_all_blocks=1 00:18:34.509 --rc geninfo_unexecuted_blocks=1 00:18:34.509 00:18:34.509 ' 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:34.509 12:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:34.509 12:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@50 -- # : 0 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:34.509 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:34.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:18:34.510 12:01:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:18:34.510 12:01:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.651 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:42.651 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:18:42.651 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:18:42.651 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:18:42.651 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:18:42.651 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # net_devs=() 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # e810=() 
00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # local -ga e810 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # x722=() 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # local -ga x722 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # mlx=() 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.652 12:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:42.652 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 
00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:42.652 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:42.652 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:42.652 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:18:42.652 
12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@247 -- # create_target_ns 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@25 -- # local no=1 
type=phy transport=tcp ip_pool=0x0a000001 max 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # ips=() 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:18:42.652 12:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:18:42.652 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:18:42.653 10.0.0.1 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:18:42.653 10.0.0.2 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:18:42.653 
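In the trace above, `setup.sh`'s `val_to_ip` turns the pool values 167772161 and 167772162 into `10.0.0.1` and `10.0.0.2` before assigning them to `cvl_0_0` and `cvl_0_1`. A self-contained sketch of that conversion (one plausible implementation; the script's own body is not shown in the trace beyond the final `printf`):

```shell
# Sketch: convert a 32-bit integer to dotted-quad form by slicing out
# each byte with shifts and masks, as setup.sh's val_to_ip does.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >>  8) & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 → 10.0.0.1
val_to_ip 167772162   # 0x0A000002 → 10.0.0.2
```

Keeping the pool as an integer (`ip_pool=0x0a000001`) lets `setup_interfaces` hand out consecutive addresses with plain arithmetic (`ip_pool += 2` per interface pair), as seen at `setup.sh@33` above.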
12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 
00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:42.653 12:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:42.653 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:42.653 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.654 ms 00:18:42.653 00:18:42.653 --- 10.0.0.1 ping statistics --- 00:18:42.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.653 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:18:42.653 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:42.653 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:18:42.653 00:18:42.653 --- 10.0.0.2 ping statistics --- 00:18:42.653 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.653 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:18:42.653 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # return 0 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:18:42.654 
12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:42.654 
12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # return 1 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev= 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@160 -- # return 0 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 
00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # return 1 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev= 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@160 -- # return 0 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:18:42.654 ' 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # nvmfpid=1292601 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # waitforlisten 1292601 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1292601 ']' 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:42.654 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.654 [2024-12-05 12:02:07.015984] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:18:42.654 [2024-12-05 12:02:07.016052] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.654 [2024-12-05 12:02:07.116283] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:42.654 [2024-12-05 12:02:07.167479] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:42.654 [2024-12-05 12:02:07.167527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.654 [2024-12-05 12:02:07.167537] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:42.654 [2024-12-05 12:02:07.167545] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:42.654 [2024-12-05 12:02:07.167557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:42.654 [2024-12-05 12:02:07.169701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.654 [2024-12-05 12:02:07.169868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.654 [2024-12-05 12:02:07.169868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.915 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.916 [2024-12-05 12:02:07.891308] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.916 [2024-12-05 12:02:07.916948] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:42.916 NULL1 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1292642 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:42.916 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.177 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:18:43.439 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.439 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642 00:18:43.439 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:18:43.439 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:18:43.439 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:43.701 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:43.701 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:43.701 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:43.701 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:43.701 12:02:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:44.272 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:44.272 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:44.272 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:44.272 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:44.272 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:44.537 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:44.537 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:44.537 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:44.537 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:44.537 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:44.803 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:44.803 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:44.803 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:44.803 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:44.803 12:02:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:45.064 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.064 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:45.064 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:45.064 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:45.064 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:45.326 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.326 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:45.326 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:45.326 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:45.326 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:45.964 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.964 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:45.964 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:45.964 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:45.964 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:45.964 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:45.964 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:45.964 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:45.964 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:45.964 12:02:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:46.280 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:46.280 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:46.280 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:46.280 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.280 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:46.891 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:46.891 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:46.891 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:46.891 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:46.891 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:47.151 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.151 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:47.151 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:47.151 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.151 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:47.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:47.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:47.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:47.674 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.674 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:47.674 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:47.674 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.674 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:47.935 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:47.935 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:47.935 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:47.935 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:47.935 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:48.505 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:48.505 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:48.505 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:48.505 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:48.505 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:48.765 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:48.765 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:48.765 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:48.765 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:48.765 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:49.089 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:49.089 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:49.089 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:49.089 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:49.089 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:49.349 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:49.349 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:49.349 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:49.349 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:49.349 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:49.609 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:49.609 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:49.609 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:49.609 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:49.609 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:49.868 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:49.868 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:49.868 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:49.868 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:49.868 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:50.438 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.438 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:50.438 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:50.438 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.438 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:50.697 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.697 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:50.697 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:50.697 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.697 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:50.956 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:50.956 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:50.956 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:50.956 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:50.956 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:51.216 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.216 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:51.216 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:51.216 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.216 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:51.785 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:51.786 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:51.786 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:51.786 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:51.786 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:52.046 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.046 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:52.046 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:52.046 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.046 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:52.307 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.307 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:52.307 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:52.307 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.307 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:52.567 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.567 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:52.567 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:52.567 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.567 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:52.828 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:52.828 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:52.828 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:18:52.828 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:52.828 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:53.089 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1292642
00:18:53.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1292642) - No such process
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1292642
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress --
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # nvmfcleanup
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@99 -- # sync
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # set +e
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # for i in {1..20}
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # set -e
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # return 0
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # '[' -n 1292601 ']'
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@337 -- # killprocess 1292601
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1292601 ']'
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1292601
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1292601
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1292601'
killing process with pid 1292601
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1292601
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1292601
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # nvmf_fini
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@254 -- # local dev
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@257 -- # remove_target_ns
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:18:53.351 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@258 -- # delete_main_bridge
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@121 -- # return 0
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns=
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0'
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns=
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1'
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # _dev=0
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # dev_map=()
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@274 -- # iptr
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # iptables-save
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:18:55.900 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # iptables-restore
00:18:55.900
00:18:55.900 real 0m21.397s
00:18:55.900 user 0m42.301s
00:18:55.900 sys 0m9.354s
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:18:55.901 ************************************
00:18:55.901 END TEST nvmf_connect_stress
00:18:55.901 ************************************
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:18:55.901 ************************************
00:18:55.901 START TEST nvmf_fused_ordering
00:18:55.901 ************************************
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:18:55.901 * Looking for test storage...
00:18:55.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-:
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-:
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<'
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 ))
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:18:55.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:55.901 --rc genhtml_branch_coverage=1
00:18:55.901 --rc genhtml_function_coverage=1
00:18:55.901 --rc genhtml_legend=1
00:18:55.901 --rc geninfo_all_blocks=1
00:18:55.901 --rc geninfo_unexecuted_blocks=1
00:18:55.901
00:18:55.901 '
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:18:55.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:55.901 --rc genhtml_branch_coverage=1
00:18:55.901 --rc genhtml_function_coverage=1
00:18:55.901 --rc genhtml_legend=1
00:18:55.901 --rc geninfo_all_blocks=1
00:18:55.901 --rc geninfo_unexecuted_blocks=1
00:18:55.901
00:18:55.901 '
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:18:55.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:55.901 --rc genhtml_branch_coverage=1
00:18:55.901 --rc genhtml_function_coverage=1
00:18:55.901 --rc genhtml_legend=1
00:18:55.901 --rc geninfo_all_blocks=1
00:18:55.901 --rc geninfo_unexecuted_blocks=1
00:18:55.901
00:18:55.901 '
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:18:55.901 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:18:55.901 --rc genhtml_branch_coverage=1
00:18:55.901 --rc genhtml_function_coverage=1
00:18:55.901 --rc genhtml_legend=1
00:18:55.901 --rc geninfo_all_blocks=1
00:18:55.901 --rc geninfo_unexecuted_blocks=1
00:18:55.901
00:18:55.901 '
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NET_TYPE=phy
00:18:55.901 12:02:20
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:55.901 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@50 -- # : 0 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:55.902 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # prepare_net_devs 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # local -g is_hw=no 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # remove_target_ns 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # xtrace_disable 00:18:55.902 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # pci_devs=() 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # net_devs=() 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # e810=() 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # local -ga e810 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # x722=() 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # local -ga x722 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # mlx=() 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # local -ga mlx 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:04.044 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- 
# pci_devs=("${e810[@]}") 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:04.045 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:04.045 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:04.045 12:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:04.045 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:04.045 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # is_hw=yes 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@247 -- # create_target_ns 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@136 -- # ip netns add 
nvmf_ns_spdk 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@28 -- # local -g _dev 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:04.045 12:02:27 
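The trace above shows `nvmftestinit` creating the `nvmf_ns_spdk` network namespace, bringing up its loopback, and (just below) moving the target-side NIC into it with `add_to_ns`. A minimal dry-run sketch of those steps — commands are printed rather than executed, since the real ip(8) calls require root; the device and namespace names are taken from this log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup traced above.
# run() echoes each command instead of executing it, so the sequence
# can be inspected without root privileges or real NICs.
run() { echo "+ $*"; }

ns=nvmf_ns_spdk        # namespace name, per the log
target_dev=cvl_0_1     # target-side NIC, per the log

run ip netns add "$ns"                       # create_target_ns
run ip netns exec "$ns" ip link set lo up    # set_up lo inside the ns
run ip link set "$target_dev" netns "$ns"    # add_to_ns cvl_0_1
```

The real helpers in `nvmf/setup.sh` eval these commands directly, which is why each one appears twice in the trace: once as the `eval '...'` line and once as the executed command.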
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # ips=() 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:04.045 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:19:04.045 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:19:04.045 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 
in_ns= 00:19:04.045 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:04.045 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:04.045 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772161 00:19:04.045 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:04.045 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:04.045 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:04.045 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:04.045 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:04.045 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:04.046 10.0.0.1 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@11 -- # local val=167772162 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:04.046 10.0.0.2 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
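In the trace, `val_to_ip` converts the pooled integers 167772161 and 167772162 into 10.0.0.1 and 10.0.0.2 via `printf '%u.%u.%u.%u\n'`. The conversion is plain shell arithmetic; a self-contained sketch of an equivalent helper (the function name comes from the trace, but this body is a reconstruction, not the script's actual source):

```shell
# Convert a 32-bit integer to dotted-quad IPv4 notation by
# extracting each byte with shifts and masks.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772161   # 10.0.0.1 (0x0A000001, the ip_pool base)
val_to_ip 167772162   # 10.0.0.2
```

This is why `setup_interfaces` can hand out initiator/target pairs by simply incrementing `ip_pool` by 2 per pair, as the `(( ip_pool += 2 ))` step above shows.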
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # 
(( pair = 0 )) 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 
NVMF_TARGET_NS_CMD 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:04.046 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:04.046 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.519 ms 00:19:04.046 00:19:04.046 --- 10.0.0.1 ping statistics --- 00:19:04.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.046 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:04.046 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:04.046 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:19:04.046 00:19:04.046 --- 10.0.0.2 ping statistics --- 00:19:04.046 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:04.046 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # return 0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:04.046 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # return 1 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev= 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@160 -- # return 0 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target0 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target1 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:04.047 12:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # return 1 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev= 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@160 -- # return 0 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:19:04.047 ' 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # nvmfpid=1299029 00:19:04.047 12:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # waitforlisten 1299029 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1299029 ']' 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.047 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:04.047 [2024-12-05 12:02:28.477076] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:19:04.047 [2024-12-05 12:02:28.477143] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.047 [2024-12-05 12:02:28.577565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.047 [2024-12-05 12:02:28.628400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:04.047 [2024-12-05 12:02:28.628463] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.047 [2024-12-05 12:02:28.628472] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.047 [2024-12-05 12:02:28.628480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.047 [2024-12-05 12:02:28.628486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:04.047 [2024-12-05 12:02:28.629253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.309 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.309 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:19:04.309 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:04.309 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:04.309 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:04.309 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:04.309 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:04.309 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.309 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:04.309 [2024-12-05 12:02:29.342315] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:04.309 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.309 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:19:04.309 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.309 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:04.570 [2024-12-05 12:02:29.366658] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:04.570 NULL1 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:19:04.570 12:02:29 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.570 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:04.570 [2024-12-05 12:02:29.438099] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
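For readability, the RPC sequence the trace above executes can be sketched as a plain script. This is a hypothetical reconstruction from the log, not the test script itself: it assumes an SPDK `nvmf_tgt` is already running (here it was started inside the `nvmf_ns_spdk` network namespace) and that SPDK's `rpc.py` is on `PATH`; the workspace path, addresses, and all flags are copied verbatim from the log lines above.

```shell
#!/usr/bin/env bash
# Sketch of the fused_ordering test setup observed in this log.
# Requires a running SPDK nvmf_tgt and scripts/rpc.py on PATH.
set -e

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NQN=nqn.2016-06.io.spdk:cnode1

# Create the TCP transport (flags as in the log: -o, IO unit size 8192).
rpc.py nvmf_create_transport -t tcp -o -u 8192

# Create the subsystem: allow any host (-a), serial number (-s),
# at most 10 namespaces (-m).
rpc.py nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10

# Listen on the target-side address the log resolved (10.0.0.2:4420).
rpc.py nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Back the subsystem with a 1000 MiB null bdev, 512-byte blocks
# (reported by the tool as "Namespace ID: 1 size: 1GB").
rpc.py bdev_null_create NULL1 1000 512
rpc.py bdev_wait_for_examine
rpc.py nvmf_subsystem_add_ns "$NQN" NULL1

# Drive the fused-ordering test against the new namespace; each
# "fused_ordering(N)" line in the output below is one iteration.
"$SPDK_DIR"/test/nvme/fused_ordering/fused_ordering \
  -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:$NQN"
```

This cannot run outside an SPDK test environment; it only documents the order of operations so the wall of `fused_ordering(N)` output that follows has context.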
00:19:04.570 [2024-12-05 12:02:29.438138] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1299177 ] 00:19:04.831 Attached to nqn.2016-06.io.spdk:cnode1 00:19:04.831 Namespace ID: 1 size: 1GB 00:19:04.831 fused_ordering(0) 00:19:04.831 fused_ordering(1) 00:19:04.831 fused_ordering(2) 00:19:04.831 fused_ordering(3) 00:19:04.831 fused_ordering(4) 00:19:04.831 fused_ordering(5) 00:19:04.831 fused_ordering(6) 00:19:04.831 fused_ordering(7) 00:19:04.831 fused_ordering(8) 00:19:04.831 fused_ordering(9) 00:19:04.831 fused_ordering(10) 00:19:04.831 fused_ordering(11) 00:19:04.831 fused_ordering(12) 00:19:04.831 fused_ordering(13) 00:19:04.831 fused_ordering(14) 00:19:04.831 fused_ordering(15) 00:19:04.831 fused_ordering(16) 00:19:04.831 fused_ordering(17) 00:19:04.831 fused_ordering(18) 00:19:04.831 fused_ordering(19) 00:19:04.831 fused_ordering(20) 00:19:04.831 fused_ordering(21) 00:19:04.831 fused_ordering(22) 00:19:04.831 fused_ordering(23) 00:19:04.831 fused_ordering(24) 00:19:04.831 fused_ordering(25) 00:19:04.831 fused_ordering(26) 00:19:04.831 fused_ordering(27) 00:19:04.831 fused_ordering(28) 00:19:04.831 fused_ordering(29) 00:19:04.831 fused_ordering(30) 00:19:04.831 fused_ordering(31) 00:19:04.831 fused_ordering(32) 00:19:04.831 fused_ordering(33) 00:19:04.831 fused_ordering(34) 00:19:04.831 fused_ordering(35) 00:19:04.831 fused_ordering(36) 00:19:04.831 fused_ordering(37) 00:19:04.831 fused_ordering(38) 00:19:04.831 fused_ordering(39) 00:19:04.831 fused_ordering(40) 00:19:04.831 fused_ordering(41) 00:19:04.831 fused_ordering(42) 00:19:04.831 fused_ordering(43) 00:19:04.831 fused_ordering(44) 00:19:04.831 fused_ordering(45) 00:19:04.831 fused_ordering(46) 00:19:04.831 fused_ordering(47) 00:19:04.831 fused_ordering(48) 00:19:04.831 fused_ordering(49) 00:19:04.831 
fused_ordering(50) 00:19:04.831 fused_ordering(51) 00:19:04.831 fused_ordering(52) 00:19:04.831 fused_ordering(53) 00:19:04.831 fused_ordering(54) 00:19:04.831 fused_ordering(55) 00:19:04.831 fused_ordering(56) 00:19:04.831 fused_ordering(57) 00:19:04.831 fused_ordering(58) 00:19:04.831 fused_ordering(59) 00:19:04.831 fused_ordering(60) 00:19:04.831 fused_ordering(61) 00:19:04.831 fused_ordering(62) 00:19:04.831 fused_ordering(63) 00:19:04.831 fused_ordering(64) 00:19:04.831 fused_ordering(65) 00:19:04.831 fused_ordering(66) 00:19:04.831 fused_ordering(67) 00:19:04.831 fused_ordering(68) 00:19:04.831 fused_ordering(69) 00:19:04.831 fused_ordering(70) 00:19:04.831 fused_ordering(71) 00:19:04.831 fused_ordering(72) 00:19:04.831 fused_ordering(73) 00:19:04.831 fused_ordering(74) 00:19:04.831 fused_ordering(75) 00:19:04.831 fused_ordering(76) 00:19:04.831 fused_ordering(77) 00:19:04.831 fused_ordering(78) 00:19:04.831 fused_ordering(79) 00:19:04.831 fused_ordering(80) 00:19:04.831 fused_ordering(81) 00:19:04.831 fused_ordering(82) 00:19:04.831 fused_ordering(83) 00:19:04.831 fused_ordering(84) 00:19:04.831 fused_ordering(85) 00:19:04.831 fused_ordering(86) 00:19:04.831 fused_ordering(87) 00:19:04.831 fused_ordering(88) 00:19:04.831 fused_ordering(89) 00:19:04.831 fused_ordering(90) 00:19:04.831 fused_ordering(91) 00:19:04.831 fused_ordering(92) 00:19:04.831 fused_ordering(93) 00:19:04.831 fused_ordering(94) 00:19:04.831 fused_ordering(95) 00:19:04.831 fused_ordering(96) 00:19:04.831 fused_ordering(97) 00:19:04.831 fused_ordering(98) 00:19:04.831 fused_ordering(99) 00:19:04.832 fused_ordering(100) 00:19:04.832 fused_ordering(101) 00:19:04.832 fused_ordering(102) 00:19:04.832 fused_ordering(103) 00:19:04.832 fused_ordering(104) 00:19:04.832 fused_ordering(105) 00:19:04.832 fused_ordering(106) 00:19:04.832 fused_ordering(107) 00:19:04.832 fused_ordering(108) 00:19:04.832 fused_ordering(109) 00:19:04.832 fused_ordering(110) 00:19:04.832 fused_ordering(111) 00:19:04.832 
fused_ordering(112) 00:19:04.832 fused_ordering(113) 00:19:04.832 fused_ordering(114) 00:19:04.832 fused_ordering(115) 00:19:04.832 fused_ordering(116) 00:19:04.832 fused_ordering(117) 00:19:04.832 fused_ordering(118) 00:19:04.832 fused_ordering(119) 00:19:04.832 fused_ordering(120) 00:19:04.832 fused_ordering(121) 00:19:04.832 fused_ordering(122) 00:19:04.832 fused_ordering(123) 00:19:04.832 fused_ordering(124) 00:19:04.832 fused_ordering(125) 00:19:04.832 fused_ordering(126) 00:19:04.832 fused_ordering(127) 00:19:04.832 fused_ordering(128) 00:19:04.832 fused_ordering(129) 00:19:04.832 fused_ordering(130) 00:19:04.832 fused_ordering(131) 00:19:04.832 fused_ordering(132) 00:19:04.832 fused_ordering(133) 00:19:04.832 fused_ordering(134) 00:19:04.832 fused_ordering(135) 00:19:04.832 fused_ordering(136) 00:19:04.832 fused_ordering(137) 00:19:04.832 fused_ordering(138) 00:19:04.832 fused_ordering(139) 00:19:04.832 fused_ordering(140) 00:19:04.832 fused_ordering(141) 00:19:04.832 fused_ordering(142) 00:19:04.832 fused_ordering(143) 00:19:04.832 fused_ordering(144) 00:19:04.832 fused_ordering(145) 00:19:04.832 fused_ordering(146) 00:19:04.832 fused_ordering(147) 00:19:04.832 fused_ordering(148) 00:19:04.832 fused_ordering(149) 00:19:04.832 fused_ordering(150) 00:19:04.832 fused_ordering(151) 00:19:04.832 fused_ordering(152) 00:19:04.832 fused_ordering(153) 00:19:04.832 fused_ordering(154) 00:19:04.832 fused_ordering(155) 00:19:04.832 fused_ordering(156) 00:19:04.832 fused_ordering(157) 00:19:04.832 fused_ordering(158) 00:19:04.832 fused_ordering(159) 00:19:04.832 fused_ordering(160) 00:19:04.832 fused_ordering(161) 00:19:04.832 fused_ordering(162) 00:19:04.832 fused_ordering(163) 00:19:04.832 fused_ordering(164) 00:19:04.832 fused_ordering(165) 00:19:04.832 fused_ordering(166) 00:19:04.832 fused_ordering(167) 00:19:04.832 fused_ordering(168) 00:19:04.832 fused_ordering(169) 00:19:04.832 fused_ordering(170) 00:19:04.832 fused_ordering(171) 00:19:04.832 fused_ordering(172) 
00:19:04.832 fused_ordering(173) 00:19:04.832 fused_ordering(174) 00:19:04.832 fused_ordering(175) 00:19:04.832 fused_ordering(176) 00:19:04.832 fused_ordering(177) 00:19:04.832 fused_ordering(178) 00:19:04.832 fused_ordering(179) 00:19:04.832 fused_ordering(180) 00:19:04.832 fused_ordering(181) 00:19:04.832 fused_ordering(182) 00:19:04.832 fused_ordering(183) 00:19:04.832 fused_ordering(184) 00:19:04.832 fused_ordering(185) 00:19:04.832 fused_ordering(186) 00:19:04.832 fused_ordering(187) 00:19:04.832 fused_ordering(188) 00:19:04.832 fused_ordering(189) 00:19:04.832 fused_ordering(190) 00:19:04.832 fused_ordering(191) 00:19:04.832 fused_ordering(192) 00:19:04.832 fused_ordering(193) 00:19:04.832 fused_ordering(194) 00:19:04.832 fused_ordering(195) 00:19:04.832 fused_ordering(196) 00:19:04.832 fused_ordering(197) 00:19:04.832 fused_ordering(198) 00:19:04.832 fused_ordering(199) 00:19:04.832 fused_ordering(200) 00:19:04.832 fused_ordering(201) 00:19:04.832 fused_ordering(202) 00:19:04.832 fused_ordering(203) 00:19:04.832 fused_ordering(204) 00:19:04.832 fused_ordering(205) 00:19:05.423 fused_ordering(206) 00:19:05.423 fused_ordering(207) 00:19:05.423 fused_ordering(208) 00:19:05.423 fused_ordering(209) 00:19:05.423 fused_ordering(210) 00:19:05.423 fused_ordering(211) 00:19:05.423 fused_ordering(212) 00:19:05.423 fused_ordering(213) 00:19:05.423 fused_ordering(214) 00:19:05.423 fused_ordering(215) 00:19:05.423 fused_ordering(216) 00:19:05.423 fused_ordering(217) 00:19:05.423 fused_ordering(218) 00:19:05.423 fused_ordering(219) 00:19:05.423 fused_ordering(220) 00:19:05.423 fused_ordering(221) 00:19:05.423 fused_ordering(222) 00:19:05.423 fused_ordering(223) 00:19:05.423 fused_ordering(224) 00:19:05.423 fused_ordering(225) 00:19:05.423 fused_ordering(226) 00:19:05.423 fused_ordering(227) 00:19:05.423 fused_ordering(228) 00:19:05.423 fused_ordering(229) 00:19:05.423 fused_ordering(230) 00:19:05.423 fused_ordering(231) 00:19:05.423 fused_ordering(232) 00:19:05.423 
fused_ordering(233) 00:19:05.423 fused_ordering(234) 00:19:05.423 fused_ordering(235) 00:19:05.423 fused_ordering(236) 00:19:05.423 fused_ordering(237) 00:19:05.423 fused_ordering(238) 00:19:05.423 fused_ordering(239) 00:19:05.423 fused_ordering(240) 00:19:05.423 fused_ordering(241) 00:19:05.423 fused_ordering(242) 00:19:05.423 fused_ordering(243) 00:19:05.423 fused_ordering(244) 00:19:05.423 fused_ordering(245) 00:19:05.423 fused_ordering(246) 00:19:05.423 fused_ordering(247) 00:19:05.423 fused_ordering(248) 00:19:05.423 fused_ordering(249) 00:19:05.423 fused_ordering(250) 00:19:05.423 fused_ordering(251) 00:19:05.423 fused_ordering(252) 00:19:05.423 fused_ordering(253) 00:19:05.423 fused_ordering(254) 00:19:05.423 fused_ordering(255) 00:19:05.423 fused_ordering(256) 00:19:05.423 fused_ordering(257) 00:19:05.423 fused_ordering(258) 00:19:05.423 fused_ordering(259) 00:19:05.423 fused_ordering(260) 00:19:05.423 fused_ordering(261) 00:19:05.423 fused_ordering(262) 00:19:05.423 fused_ordering(263) 00:19:05.423 fused_ordering(264) 00:19:05.423 fused_ordering(265) 00:19:05.423 fused_ordering(266) 00:19:05.423 fused_ordering(267) 00:19:05.423 fused_ordering(268) 00:19:05.423 fused_ordering(269) 00:19:05.423 fused_ordering(270) 00:19:05.423 fused_ordering(271) 00:19:05.423 fused_ordering(272) 00:19:05.423 fused_ordering(273) 00:19:05.423 fused_ordering(274) 00:19:05.423 fused_ordering(275) 00:19:05.423 fused_ordering(276) 00:19:05.423 fused_ordering(277) 00:19:05.423 fused_ordering(278) 00:19:05.423 fused_ordering(279) 00:19:05.423 fused_ordering(280) 00:19:05.423 fused_ordering(281) 00:19:05.423 fused_ordering(282) 00:19:05.423 fused_ordering(283) 00:19:05.423 fused_ordering(284) 00:19:05.423 fused_ordering(285) 00:19:05.423 fused_ordering(286) 00:19:05.423 fused_ordering(287) 00:19:05.423 fused_ordering(288) 00:19:05.423 fused_ordering(289) 00:19:05.423 fused_ordering(290) 00:19:05.423 fused_ordering(291) 00:19:05.423 fused_ordering(292) 00:19:05.423 fused_ordering(293) 
00:19:05.423 fused_ordering(294) 00:19:05.423 fused_ordering(295) 00:19:05.423 fused_ordering(296) 00:19:05.423 fused_ordering(297) 00:19:05.423 fused_ordering(298) 00:19:05.423 fused_ordering(299) 00:19:05.423 fused_ordering(300) 00:19:05.424 fused_ordering(301) 00:19:05.424 fused_ordering(302) 00:19:05.424 fused_ordering(303) 00:19:05.424 fused_ordering(304) 00:19:05.424 fused_ordering(305) 00:19:05.424 fused_ordering(306) 00:19:05.424 fused_ordering(307) 00:19:05.424 fused_ordering(308) 00:19:05.424 fused_ordering(309) 00:19:05.424 fused_ordering(310) 00:19:05.424 fused_ordering(311) 00:19:05.424 fused_ordering(312) 00:19:05.424 fused_ordering(313) 00:19:05.424 fused_ordering(314) 00:19:05.424 fused_ordering(315) 00:19:05.424 fused_ordering(316) 00:19:05.424 fused_ordering(317) 00:19:05.424 fused_ordering(318) 00:19:05.424 fused_ordering(319) 00:19:05.424 fused_ordering(320) 00:19:05.424 fused_ordering(321) 00:19:05.424 fused_ordering(322) 00:19:05.424 fused_ordering(323) 00:19:05.424 fused_ordering(324) 00:19:05.424 fused_ordering(325) 00:19:05.424 fused_ordering(326) 00:19:05.424 fused_ordering(327) 00:19:05.424 fused_ordering(328) 00:19:05.424 fused_ordering(329) 00:19:05.424 fused_ordering(330) 00:19:05.424 fused_ordering(331) 00:19:05.424 fused_ordering(332) 00:19:05.424 fused_ordering(333) 00:19:05.424 fused_ordering(334) 00:19:05.424 fused_ordering(335) 00:19:05.424 fused_ordering(336) 00:19:05.424 fused_ordering(337) 00:19:05.424 fused_ordering(338) 00:19:05.424 fused_ordering(339) 00:19:05.424 fused_ordering(340) 00:19:05.424 fused_ordering(341) 00:19:05.424 fused_ordering(342) 00:19:05.424 fused_ordering(343) 00:19:05.424 fused_ordering(344) 00:19:05.424 fused_ordering(345) 00:19:05.424 fused_ordering(346) 00:19:05.424 fused_ordering(347) 00:19:05.424 fused_ordering(348) 00:19:05.424 fused_ordering(349) 00:19:05.424 fused_ordering(350) 00:19:05.424 fused_ordering(351) 00:19:05.424 fused_ordering(352) 00:19:05.424 fused_ordering(353) 00:19:05.424 
fused_ordering(354) 00:19:05.424 fused_ordering(355) 00:19:05.424 fused_ordering(356) 00:19:05.424 fused_ordering(357) 00:19:05.424 fused_ordering(358) 00:19:05.424 fused_ordering(359) 00:19:05.424 fused_ordering(360) 00:19:05.424 fused_ordering(361) 00:19:05.424 fused_ordering(362) 00:19:05.424 fused_ordering(363) 00:19:05.424 fused_ordering(364) 00:19:05.424 fused_ordering(365) 00:19:05.424 fused_ordering(366) 00:19:05.424 fused_ordering(367) 00:19:05.424 fused_ordering(368) 00:19:05.424 fused_ordering(369) 00:19:05.424 fused_ordering(370) 00:19:05.424 fused_ordering(371) 00:19:05.424 fused_ordering(372) 00:19:05.424 fused_ordering(373) 00:19:05.424 fused_ordering(374) 00:19:05.424 fused_ordering(375) 00:19:05.424 fused_ordering(376) 00:19:05.424 fused_ordering(377) 00:19:05.424 fused_ordering(378) 00:19:05.424 fused_ordering(379) 00:19:05.424 fused_ordering(380) 00:19:05.424 fused_ordering(381) 00:19:05.424 fused_ordering(382) 00:19:05.424 fused_ordering(383) 00:19:05.424 fused_ordering(384) 00:19:05.424 fused_ordering(385) 00:19:05.424 fused_ordering(386) 00:19:05.424 fused_ordering(387) 00:19:05.424 fused_ordering(388) 00:19:05.424 fused_ordering(389) 00:19:05.424 fused_ordering(390) 00:19:05.424 fused_ordering(391) 00:19:05.424 fused_ordering(392) 00:19:05.424 fused_ordering(393) 00:19:05.424 fused_ordering(394) 00:19:05.424 fused_ordering(395) 00:19:05.424 fused_ordering(396) 00:19:05.424 fused_ordering(397) 00:19:05.424 fused_ordering(398) 00:19:05.424 fused_ordering(399) 00:19:05.424 fused_ordering(400) 00:19:05.424 fused_ordering(401) 00:19:05.424 fused_ordering(402) 00:19:05.424 fused_ordering(403) 00:19:05.424 fused_ordering(404) 00:19:05.424 fused_ordering(405) 00:19:05.424 fused_ordering(406) 00:19:05.424 fused_ordering(407) 00:19:05.424 fused_ordering(408) 00:19:05.424 fused_ordering(409) 00:19:05.424 fused_ordering(410) 00:19:05.684 fused_ordering(411) 00:19:05.684 fused_ordering(412) 00:19:05.684 fused_ordering(413) 00:19:05.684 fused_ordering(414) 
00:19:05.684 fused_ordering(415) 00:19:05.684 fused_ordering(416) 00:19:05.684 fused_ordering(417) 00:19:05.684 fused_ordering(418) 00:19:05.684 fused_ordering(419) 00:19:05.684 fused_ordering(420) 00:19:05.684 fused_ordering(421) 00:19:05.684 fused_ordering(422) 00:19:05.684 fused_ordering(423) 00:19:05.684 fused_ordering(424) 00:19:05.684 fused_ordering(425) 00:19:05.684 fused_ordering(426) 00:19:05.684 fused_ordering(427) 00:19:05.684 fused_ordering(428) 00:19:05.684 fused_ordering(429) 00:19:05.684 fused_ordering(430) 00:19:05.684 fused_ordering(431) 00:19:05.684 fused_ordering(432) 00:19:05.684 fused_ordering(433) 00:19:05.684 fused_ordering(434) 00:19:05.684 fused_ordering(435) 00:19:05.684 fused_ordering(436) 00:19:05.684 fused_ordering(437) 00:19:05.684 fused_ordering(438) 00:19:05.684 fused_ordering(439) 00:19:05.684 fused_ordering(440) 00:19:05.684 fused_ordering(441) 00:19:05.684 fused_ordering(442) 00:19:05.684 fused_ordering(443) 00:19:05.684 fused_ordering(444) 00:19:05.684 fused_ordering(445) 00:19:05.684 fused_ordering(446) 00:19:05.684 fused_ordering(447) 00:19:05.684 fused_ordering(448) 00:19:05.684 fused_ordering(449) 00:19:05.684 fused_ordering(450) 00:19:05.684 fused_ordering(451) 00:19:05.684 fused_ordering(452) 00:19:05.684 fused_ordering(453) 00:19:05.684 fused_ordering(454) 00:19:05.684 fused_ordering(455) 00:19:05.684 fused_ordering(456) 00:19:05.684 fused_ordering(457) 00:19:05.684 fused_ordering(458) 00:19:05.684 fused_ordering(459) 00:19:05.684 fused_ordering(460) 00:19:05.684 fused_ordering(461) 00:19:05.684 fused_ordering(462) 00:19:05.684 fused_ordering(463) 00:19:05.684 fused_ordering(464) 00:19:05.684 fused_ordering(465) 00:19:05.684 fused_ordering(466) 00:19:05.684 fused_ordering(467) 00:19:05.684 fused_ordering(468) 00:19:05.684 fused_ordering(469) 00:19:05.684 fused_ordering(470) 00:19:05.684 fused_ordering(471) 00:19:05.684 fused_ordering(472) 00:19:05.684 fused_ordering(473) 00:19:05.684 fused_ordering(474) 00:19:05.684 
fused_ordering(475) 00:19:05.684 ... fused_ordering(1018) 00:19:06.827 [repetitive counter entries 475-1018 elided; logged between 00:19:05.684 and 00:19:06.827]
fused_ordering(1019) 00:19:06.827 fused_ordering(1020) 00:19:06.827 fused_ordering(1021) 00:19:06.827 fused_ordering(1022) 00:19:06.827 fused_ordering(1023) 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@99 -- # sync 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # set +e 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:06.827 rmmod nvme_tcp 00:19:06.827 rmmod nvme_fabrics 00:19:06.827 rmmod nvme_keyring 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # set -e 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # return 0 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # '[' -n 1299029 ']' 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@337 -- # killprocess 1299029 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1299029 ']' 00:19:06.827 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1299029 00:19:07.086 12:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:19:07.086 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.086 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1299029 00:19:07.086 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:07.086 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:07.086 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1299029' 00:19:07.086 killing process with pid 1299029 00:19:07.086 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1299029 00:19:07.086 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1299029 00:19:07.086 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:07.086 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # nvmf_fini 00:19:07.086 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@254 -- # local dev 00:19:07.086 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:07.086 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:07.086 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:07.086 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@258 -- # 
delete_main_bridge 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@121 -- # return 0 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # _dev=0 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # dev_map=() 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@274 -- # iptr 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # iptables-save 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # iptables-restore 00:19:09.631 00:19:09.631 real 0m13.567s 00:19:09.631 user 0m7.131s 00:19:09.631 sys 0m7.292s 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:19:09.631 ************************************ 00:19:09.631 END TEST nvmf_fused_ordering 00:19:09.631 ************************************ 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:09.631 ************************************ 00:19:09.631 START TEST nvmf_ns_masking 00:19:09.631 
************************************ 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:19:09.631 * Looking for test storage... 00:19:09.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:09.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.631 --rc genhtml_branch_coverage=1 00:19:09.631 --rc genhtml_function_coverage=1 00:19:09.631 --rc genhtml_legend=1 00:19:09.631 --rc geninfo_all_blocks=1 00:19:09.631 --rc geninfo_unexecuted_blocks=1 00:19:09.631 00:19:09.631 ' 00:19:09.631 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:09.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.631 --rc genhtml_branch_coverage=1 00:19:09.631 --rc genhtml_function_coverage=1 00:19:09.632 --rc genhtml_legend=1 00:19:09.632 --rc geninfo_all_blocks=1 00:19:09.632 --rc geninfo_unexecuted_blocks=1 00:19:09.632 00:19:09.632 ' 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:09.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.632 --rc genhtml_branch_coverage=1 00:19:09.632 --rc genhtml_function_coverage=1 00:19:09.632 --rc genhtml_legend=1 00:19:09.632 --rc geninfo_all_blocks=1 00:19:09.632 --rc geninfo_unexecuted_blocks=1 00:19:09.632 00:19:09.632 ' 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:09.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:09.632 --rc genhtml_branch_coverage=1 00:19:09.632 --rc genhtml_function_coverage=1 00:19:09.632 --rc genhtml_legend=1 00:19:09.632 --rc geninfo_all_blocks=1 00:19:09.632 --rc geninfo_unexecuted_blocks=1 00:19:09.632 00:19:09.632 ' 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:09.632 12:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:09.632 12:02:34 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@50 -- # : 0 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:09.632 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=d7f0d6c6-ba53-48b8-aa38-0cd66b3b7ade 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@14 -- # ns2uuid=4e923484-78f1-44c4-be81-8c51a243f595 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=b327a6a0-3200-49d6-8f6b-461312f15d54 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # remove_target_ns 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # xtrace_disable 00:19:09.632 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # pci_devs=() 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # net_devs=() 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # e810=() 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # local -ga e810 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # x722=() 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # local -ga x722 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # mlx=() 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # local -ga mlx 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:17.771 
12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:17.771 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:17.771 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:17.771 12:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:17.771 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:17.771 12:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:17.771 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # is_hw=yes 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@247 -- # create_target_ns 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@28 -- # local -g _dev 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:17.771 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # ips=() 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # local id=0 type=phy 
ip=167772161 transport=tcp ips 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:17.772 12:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772161 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:17.772 10.0.0.1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772162 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:17.772 12:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:17.772 10.0.0.2 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:17.772 12:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:17.772 12:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.772 12:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:17.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:17.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.648 ms 00:19:17.772 00:19:17.772 --- 10.0.0.1 ping statistics --- 00:19:17.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.772 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk 
cat /sys/class/net/cvl_0_1/ifalias' 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:17.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:17.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:19:17.772 00:19:17.772 --- 10.0.0.2 ping statistics --- 00:19:17.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:17.772 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # return 0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:17.772 12:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:17.772 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # return 
1 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev= 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@160 -- # return 0 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.773 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target0 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk 
cat /sys/class/net/cvl_0_1/ifalias 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target1 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # return 1 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev= 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@160 -- # return 0 00:19:17.773 12:02:42 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:19:17.773 ' 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # nvmfpid=1303952 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # waitforlisten 1303952 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # '[' -z 1303952 ']' 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:17.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:17.773 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:17.773 [2024-12-05 12:02:42.133530] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:19:17.773 [2024-12-05 12:02:42.133596] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:17.773 [2024-12-05 12:02:42.232443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.773 [2024-12-05 12:02:42.283911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:17.773 [2024-12-05 12:02:42.283962] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:17.773 [2024-12-05 12:02:42.283971] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:17.773 [2024-12-05 12:02:42.283978] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:17.773 [2024-12-05 12:02:42.283985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:17.773 [2024-12-05 12:02:42.284750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.035 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.035 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:18.035 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:18.035 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:18.035 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:18.035 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:18.035 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:18.295 [2024-12-05 12:02:43.156626] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:18.295 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:19:18.295 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:19:18.295 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:18.555 Malloc1 00:19:18.555 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:18.816 Malloc2 00:19:18.816 12:02:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:18.816 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:19:19.077 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:19.338 [2024-12-05 12:02:44.210378] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.338 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:19:19.338 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b327a6a0-3200-49d6-8f6b-461312f15d54 -a 10.0.0.2 -s 4420 -i 4 00:19:19.598 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:19:19.598 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:19.598 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:19.598 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:19.598 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:21.512 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:21.512 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:21.512 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:21.512 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:21.512 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:21.512 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:21.512 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:21.512 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:21.513 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:21.513 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:21.513 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:19:21.513 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:21.513 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:21.513 [ 0]:0x1 00:19:21.513 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:21.513 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:21.773 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a685ca40c82b4e8aa3ff7201a56de169 00:19:21.773 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a685ca40c82b4e8aa3ff7201a56de169 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:21.773 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:19:21.773 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:19:21.773 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:21.773 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:21.773 [ 0]:0x1 00:19:21.773 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:21.773 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:21.773 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a685ca40c82b4e8aa3ff7201a56de169 00:19:21.773 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a685ca40c82b4e8aa3ff7201a56de169 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:21.773 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:19:21.773 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:21.773 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:22.050 [ 1]:0x2 00:19:22.050 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:22.050 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:22.051 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de9cd71fbc34401a8d977df95f75f330 00:19:22.051 
12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de9cd71fbc34401a8d977df95f75f330 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:22.051 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:19:22.051 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:22.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:22.051 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:22.324 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:19:22.324 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:19:22.324 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b327a6a0-3200-49d6-8f6b-461312f15d54 -a 10.0.0.2 -s 4420 -i 4 00:19:22.583 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:19:22.583 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:22.583 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:22.583 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:19:22.583 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:19:22.583 12:02:47 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:24.491 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:24.491 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:24.491 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:24.491 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:24.491 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:24.491 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:24.491 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:24.491 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 
0x2 00:19:24.752 [ 0]:0x2 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de9cd71fbc34401a8d977df95f75f330 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de9cd71fbc34401a8d977df95f75f330 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:24.752 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:25.013 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:19:25.013 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:25.013 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:25.013 [ 0]:0x1 00:19:25.013 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:25.013 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:25.013 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a685ca40c82b4e8aa3ff7201a56de169 00:19:25.013 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a685ca40c82b4e8aa3ff7201a56de169 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:25.013 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:19:25.013 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 
-- # grep 0x2 00:19:25.013 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:25.013 [ 1]:0x2 00:19:25.013 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:25.013 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:25.274 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de9cd71fbc34401a8d977df95f75f330 00:19:25.274 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de9cd71fbc34401a8d977df95f75f330 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:25.274 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:25.274 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:19:25.274 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:25.274 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:25.274 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:25.274 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.274 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:25.274 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:25.274 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:25.274 
12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:25.274 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:25.274 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:25.274 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:25.534 [ 0]:0x2 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=de9cd71fbc34401a8d977df95f75f330 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de9cd71fbc34401a8d977df95f75f330 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:25.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:25.534 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:25.795 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:19:25.795 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b327a6a0-3200-49d6-8f6b-461312f15d54 -a 10.0.0.2 -s 4420 -i 4 00:19:25.795 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:25.795 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:19:25.795 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:25.795 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:25.795 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:25.795 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:19:27.708 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 
-- # (( i++ <= 15 )) 00:19:27.708 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:27.708 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:27.708 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:27.708 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:27.708 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:19:27.708 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:19:27.708 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:19:27.969 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:19:27.969 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:19:27.969 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:19:27.969 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:27.969 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:27.969 [ 0]:0x1 00:19:27.969 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:27.969 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:27.969 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a685ca40c82b4e8aa3ff7201a56de169 00:19:27.969 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@45 -- # [[ a685ca40c82b4e8aa3ff7201a56de169 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:27.969 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:19:27.969 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:27.969 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:28.229 [ 1]:0x2 00:19:28.229 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:28.229 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:28.229 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de9cd71fbc34401a8d977df95f75f330 00:19:28.229 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de9cd71fbc34401a8d977df95f75f330 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:28.229 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:28.229 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:19:28.229 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:28.229 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:28.229 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:28.229 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.229 12:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:28.229 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.229 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:28.229 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:28.229 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:28.490 [ 0]:0x2 
00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de9cd71fbc34401a8d977df95f75f330 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de9cd71fbc34401a8d977df95f75f330 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:28.490 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:19:28.752 [2024-12-05 12:02:53.548096] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:19:28.752 request: 00:19:28.752 { 00:19:28.752 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:28.752 "nsid": 2, 00:19:28.752 "host": "nqn.2016-06.io.spdk:host1", 00:19:28.752 "method": "nvmf_ns_remove_host", 00:19:28.752 "req_id": 1 00:19:28.752 } 00:19:28.752 Got JSON-RPC error response 00:19:28.752 response: 00:19:28.752 { 00:19:28.752 "code": -32602, 00:19:28.752 "message": "Invalid parameters" 00:19:28.752 } 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 
00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:19:28.752 [ 0]:0x2 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=de9cd71fbc34401a8d977df95f75f330 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ de9cd71fbc34401a8d977df95f75f330 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:28.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1306247 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1306247 /var/tmp/host.sock 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # '[' -z 1306247 ']' 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:28.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.752 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:28.752 [2024-12-05 12:02:53.786699] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:19:28.752 [2024-12-05 12:02:53.786755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1306247 ] 00:19:29.013 [2024-12-05 12:02:53.876250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.013 [2024-12-05 12:02:53.912711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.586 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.586 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:19:29.586 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:29.846 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:30.107 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid d7f0d6c6-ba53-48b8-aa38-0cd66b3b7ade 00:19:30.107 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:19:30.107 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D7F0D6C6BA5348B8AA380CD66B3B7ADE -i 00:19:30.107 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 4e923484-78f1-44c4-be81-8c51a243f595 00:19:30.107 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:19:30.107 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 4E92348478F144C4BE818C51A243F595 -i 00:19:30.367 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:19:30.627 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:19:30.627 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:30.627 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:19:31.199 nvme0n1 00:19:31.199 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:31.199 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:19:31.460 nvme1n2 00:19:31.460 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:19:31.460 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:19:31.460 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:31.460 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:19:31.460 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:19:31.721 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:19:31.721 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:19:31.721 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:19:31.721 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:19:31.721 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ d7f0d6c6-ba53-48b8-aa38-0cd66b3b7ade == \d\7\f\0\d\6\c\6\-\b\a\5\3\-\4\8\b\8\-\a\a\3\8\-\0\c\d\6\6\b\3\b\7\a\d\e ]] 00:19:31.721 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:19:31.721 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:19:31.721 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:19:31.983 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 4e923484-78f1-44c4-be81-8c51a243f595 == \4\e\9\2\3\4\8\4\-\7\8\f\1\-\4\4\c\4\-\b\e\8\1\-\8\c\5\1\a\2\4\3\f\5\9\5 ]] 00:19:31.983 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:19:32.244 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:19:32.244 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid d7f0d6c6-ba53-48b8-aa38-0cd66b3b7ade 00:19:32.244 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:19:32.244 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D7F0D6C6BA5348B8AA380CD66B3B7ADE 00:19:32.244 12:02:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:19:32.244 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D7F0D6C6BA5348B8AA380CD66B3B7ADE 00:19:32.244 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.244 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.244 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.244 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.244 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.244 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:32.244 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:32.244 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:19:32.244 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g D7F0D6C6BA5348B8AA380CD66B3B7ADE 00:19:32.505 [2024-12-05 12:02:57.398133] bdev.c:8674:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: invalid 00:19:32.505 [2024-12-05 12:02:57.398161] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:19:32.505 [2024-12-05 12:02:57.398168] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:32.505 request: 00:19:32.505 { 00:19:32.505 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.505 "namespace": { 00:19:32.505 "bdev_name": "invalid", 00:19:32.505 "nsid": 1, 00:19:32.505 "nguid": "D7F0D6C6BA5348B8AA380CD66B3B7ADE", 00:19:32.505 "no_auto_visible": false, 00:19:32.505 "hide_metadata": false 00:19:32.505 }, 00:19:32.505 "method": "nvmf_subsystem_add_ns", 00:19:32.505 "req_id": 1 00:19:32.505 } 00:19:32.505 Got JSON-RPC error response 00:19:32.505 response: 00:19:32.505 { 00:19:32.505 "code": -32602, 00:19:32.505 "message": "Invalid parameters" 00:19:32.505 } 00:19:32.505 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:19:32.505 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:32.505 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:32.505 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:32.505 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid d7f0d6c6-ba53-48b8-aa38-0cd66b3b7ade 00:19:32.505 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:19:32.505 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g D7F0D6C6BA5348B8AA380CD66B3B7ADE -i 00:19:32.767 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:19:34.680 
12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:19:34.680 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:19:34.680 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:19:34.941 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:19:34.941 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1306247 00:19:34.941 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1306247 ']' 00:19:34.941 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1306247 00:19:34.941 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:19:34.941 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.941 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1306247 00:19:34.941 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:34.941 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:34.941 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1306247' 00:19:34.941 killing process with pid 1306247 00:19:34.941 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1306247 00:19:34.941 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1306247 00:19:35.202 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:35.202 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:19:35.202 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:19:35.202 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:35.202 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@99 -- # sync 00:19:35.202 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:35.202 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # set +e 00:19:35.202 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:35.202 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:35.202 rmmod nvme_tcp 00:19:35.202 rmmod nvme_fabrics 00:19:35.464 rmmod nvme_keyring 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # set -e 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # return 0 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # '[' -n 1303952 ']' 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@337 -- # killprocess 1303952 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1303952 ']' 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1303952 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 
00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1303952 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1303952' 00:19:35.464 killing process with pid 1303952 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1303952 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1303952 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # nvmf_fini 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@254 -- # local dev 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:35.464 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 
00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@121 -- # return 0 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:19:38.008 12:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # _dev=0 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # dev_map=() 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@274 -- # iptr 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # iptables-save 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # iptables-restore 00:19:38.008 00:19:38.008 real 0m28.346s 00:19:38.008 user 0m32.249s 00:19:38.008 sys 0m8.174s 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:19:38.008 ************************************ 00:19:38.008 END TEST nvmf_ns_masking 00:19:38.008 ************************************ 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:38.008 12:03:02 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:38.008 ************************************ 00:19:38.008 START TEST nvmf_nvme_cli 00:19:38.009 ************************************ 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:19:38.009 * Looking for test storage... 00:19:38.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
scripts/common.sh@344 -- # case "$op" in 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:38.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.009 --rc genhtml_branch_coverage=1 00:19:38.009 --rc genhtml_function_coverage=1 00:19:38.009 --rc genhtml_legend=1 00:19:38.009 --rc geninfo_all_blocks=1 00:19:38.009 --rc geninfo_unexecuted_blocks=1 00:19:38.009 00:19:38.009 ' 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:38.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.009 --rc genhtml_branch_coverage=1 00:19:38.009 --rc genhtml_function_coverage=1 00:19:38.009 --rc genhtml_legend=1 00:19:38.009 --rc geninfo_all_blocks=1 00:19:38.009 --rc geninfo_unexecuted_blocks=1 00:19:38.009 00:19:38.009 ' 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:38.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.009 --rc genhtml_branch_coverage=1 00:19:38.009 --rc genhtml_function_coverage=1 00:19:38.009 --rc genhtml_legend=1 00:19:38.009 --rc geninfo_all_blocks=1 00:19:38.009 --rc geninfo_unexecuted_blocks=1 00:19:38.009 00:19:38.009 ' 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:38.009 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:38.009 --rc genhtml_branch_coverage=1 00:19:38.009 --rc genhtml_function_coverage=1 00:19:38.009 --rc genhtml_legend=1 00:19:38.009 --rc geninfo_all_blocks=1 00:19:38.009 --rc geninfo_unexecuted_blocks=1 00:19:38.009 00:19:38.009 ' 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@50 -- # : 0 
00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:38.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:38.009 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # local -g 
is_hw=no 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # remove_target_ns 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # xtrace_disable 00:19:38.010 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # pci_devs=() 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # net_devs=() 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/common.sh@136 -- # e810=() 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@136 -- # local -ga e810 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # x722=() 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # local -ga x722 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # mlx=() 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # local -ga mlx 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@159 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:19:46.213 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:19:46.213 Found 0000:4b:00.1 (0x8086 - 
0x159b) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:19:46.213 Found net devices under 0000:4b:00.0: cvl_0_0 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:19:46.213 Found net devices under 0000:4b:00.1: cvl_0_1 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # is_hw=yes 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:19:46.213 12:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@247 -- # create_target_ns 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@28 -- # local -g _dev 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:46.213 12:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # ips=() 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:19:46.213 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 
netns nvmf_ns_spdk 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772161 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:46.214 10.0.0.1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.214 12:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772162 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:46.214 10.0.0.2 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:46.214 
12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:19:46.214 12:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 
in_ns=NVMF_TARGET_NS_CMD count=1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:46.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:46.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.625 ms 00:19:46.214 00:19:46.214 --- 10.0.0.1 ping statistics --- 00:19:46.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.214 rtt min/avg/max/mdev = 0.625/0.625/0.625/0.000 ms 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=target0 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:19:46.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:46.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:19:46.214 00:19:46.214 --- 10.0.0.2 ping statistics --- 00:19:46.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.214 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair++ )) 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:19:46.214 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # return 0 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=initiator0 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:19:46.215 12:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=initiator1 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # return 1 00:19:46.215 12:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev= 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@160 -- # return 0 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev target0 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=target0 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:46.215 12:03:10 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev target1 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=target1 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # return 1 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev= 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@160 -- # return 0 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 
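The address-discovery steps above repeatedly resolve a logical device name (initiator0/target0) to a kernel interface via dev_map and then read the IP that setup stored in the interface's `ifalias` file. A minimal standalone sketch of that lookup, assuming the helper shape seen in nvmf/setup.sh (the `sysfs_root` parameter is a hypothetical test hook added here so the example can run against a mocked directory; the real helper always reads /sys/class/net):

```shell
# Sketch of the get_ip_address/ifalias pattern from nvmf/setup.sh.
# sysfs_root is a hypothetical hook for demonstration, not part of the
# real helper, which also resolves dev names through dev_map and may
# run inside the target netns.
get_ip_from_ifalias() {
    local dev=$1 sysfs_root=${2:-/sys/class/net} ip
    ip=$(cat "$sysfs_root/$dev/ifalias") || return 1   # no such interface
    [[ -n $ip ]] && echo "$ip"                         # empty alias -> no IP
}

# Demonstration against a mocked sysfs tree:
tmp=$(mktemp -d)
mkdir -p "$tmp/cvl_0_0"
echo 10.0.0.1 > "$tmp/cvl_0_0/ifalias"
get_ip_from_ifalias cvl_0_0 "$tmp"
```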
00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:19:46.215 ' 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # nvmfpid=1312450 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # waitforlisten 1312450 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1312450 ']' 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.215 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.215 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:46.215 [2024-12-05 12:03:10.637720] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:19:46.215 [2024-12-05 12:03:10.637791] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.215 [2024-12-05 12:03:10.739504] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.215 [2024-12-05 12:03:10.793480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.215 [2024-12-05 12:03:10.793533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.215 [2024-12-05 12:03:10.793542] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.215 [2024-12-05 12:03:10.793549] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.215 [2024-12-05 12:03:10.793556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
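The nvmfappstart call above launches nvmf_tgt inside the namespace and then blocks in waitforlisten until the app is reachable on /var/tmp/spdk.sock. The core of that wait is a bounded poll loop; a simplified sketch under that assumption (the real helper in autotest_common.sh does more, e.g. drives the RPC client once the socket appears):

```shell
# Simplified sketch of the waitforlisten idea from autotest_common.sh:
# poll until the RPC socket path exists, bailing out early if the
# process died before it started listening.
wait_for_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i=0
    while (( i++ < 100 )); do
        [[ -e $sock ]] && return 0                 # socket showed up
        kill -0 "$pid" 2>/dev/null || return 1     # app exited early
        sleep 0.1
    done
    return 1                                       # timed out
}
```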
00:19:46.215 [2024-12-05 12:03:10.795977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.215 [2024-12-05 12:03:10.796137] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.215 [2024-12-05 12:03:10.796297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.215 [2024-12-05 12:03:10.796297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.477 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.477 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:19:46.477 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:46.477 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:46.477 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:46.477 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.477 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:46.477 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.477 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:46.477 [2024-12-05 12:03:11.501324] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:46.477 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.477 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:46.477 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:46.477 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:46.738 Malloc0 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:46.738 Malloc1 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:46.738 [2024-12-05 12:03:11.617781] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:19:46.738 00:19:46.738 Discovery Log Number of Records 2, Generation counter 2 00:19:46.738 =====Discovery Log Entry 0====== 00:19:46.738 trtype: tcp 00:19:46.738 adrfam: ipv4 00:19:46.738 subtype: current discovery subsystem 00:19:46.738 treq: not required 00:19:46.738 portid: 0 00:19:46.738 trsvcid: 4420 
00:19:46.738 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:46.738 traddr: 10.0.0.2 00:19:46.738 eflags: explicit discovery connections, duplicate discovery information 00:19:46.738 sectype: none 00:19:46.738 =====Discovery Log Entry 1====== 00:19:46.738 trtype: tcp 00:19:46.738 adrfam: ipv4 00:19:46.738 subtype: nvme subsystem 00:19:46.738 treq: not required 00:19:46.738 portid: 0 00:19:46.738 trsvcid: 4420 00:19:46.738 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:46.738 traddr: 10.0.0.2 00:19:46.738 eflags: none 00:19:46.738 sectype: none 00:19:46.738 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:19:46.999 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:19:46.999 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:19:46.999 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:46.999 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:19:46.999 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:19:46.999 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:46.999 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:19:46.999 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:46.999 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:19:46.999 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:48.384 12:03:13 
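The discover/connect exchange driven above reduces to a short nvme-cli sequence. A sketch reconstructed from this log (addresses, NQNs and the host UUID are the ones this run used; running it requires root, the nvme-cli package, and a live target, so it is shown as a command fragment only):

```shell
# NVMe/TCP fabrics commands as issued by nvme_cli.sh in this run
# (values below are taken from the log above; requires root + nvme-cli).
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

nvme discover --hostnqn="$HOSTNQN" --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -t tcp -a 10.0.0.2 -s 4420                      # dump the discovery log
nvme connect --hostnqn="$HOSTNQN" --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme disconnect -n nqn.2016-06.io.spdk:cnode1       # teardown at test end
```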
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:19:48.384 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:19:48.384 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:48.384 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:19:48.384 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:19:48.384 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:19:50.928 
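The device enumeration in the `get_nvme_devs` calls above works by scraping `nvme list` output: the read loop skips the header and separator rows and keeps anything whose first column starts with /dev/nvme. A self-contained sketch of that filter, fed canned output here instead of invoking `nvme list`:

```shell
# Sketch of the get_nvme_devs filter from nvmf/common.sh: read
# `nvme list`-style output and emit only /dev/nvme* node names
# from the first column (header/separator rows fall through).
filter_nvme_devs() {
    local dev _
    while read -r dev _; do
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done
}

# Demonstration with canned output shaped like this run's `nvme list`:
filter_nvme_devs <<'EOF'
Node             SN                   Model
---------------- -------------------- ----------------
/dev/nvme0n1     SPDKISFASTANDAWESOME SPDK_Controller1
/dev/nvme0n2     SPDKISFASTANDAWESOME SPDK_Controller1
EOF
```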
12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:19:50.928 /dev/nvme0n2 ]] 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ 
--------------------- == /dev/nvme* ]] 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:19:50.928 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:50.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@99 -- # sync 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # set +e 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:50.929 rmmod nvme_tcp 00:19:50.929 rmmod nvme_fabrics 00:19:50.929 rmmod nvme_keyring 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # set -e 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # return 0 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # '[' -n 1312450 ']' 
00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@337 -- # killprocess 1312450 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1312450 ']' 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1312450 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1312450 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1312450' 00:19:50.929 killing process with pid 1312450 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1312450 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1312450 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # nvmf_fini 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@254 -- # local dev 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@257 -- # remove_target_ns 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:50.929 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@258 -- # delete_main_bridge 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@121 -- # return 0 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 
00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # _dev=0 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # dev_map=() 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@274 -- # iptr 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # iptables-save 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # iptables-restore 00:19:53.475 00:19:53.475 real 0m15.290s 00:19:53.475 user 0m22.427s 00:19:53.475 sys 0m6.518s 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:19:53.475 ************************************ 00:19:53.475 END TEST nvmf_nvme_cli 00:19:53.475 ************************************ 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:53.475 12:03:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:53.475 ************************************ 00:19:53.475 START TEST nvmf_vfio_user 00:19:53.475 ************************************ 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:19:53.475 * Looking for test storage... 00:19:53.475 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:19:53.475 12:03:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:53.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.475 --rc genhtml_branch_coverage=1 00:19:53.475 --rc genhtml_function_coverage=1 00:19:53.475 --rc genhtml_legend=1 00:19:53.475 --rc geninfo_all_blocks=1 00:19:53.475 --rc geninfo_unexecuted_blocks=1 00:19:53.475 00:19:53.475 ' 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:53.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.475 --rc genhtml_branch_coverage=1 00:19:53.475 --rc genhtml_function_coverage=1 00:19:53.475 --rc genhtml_legend=1 00:19:53.475 --rc geninfo_all_blocks=1 00:19:53.475 --rc geninfo_unexecuted_blocks=1 00:19:53.475 00:19:53.475 ' 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:53.475 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.475 --rc genhtml_branch_coverage=1 00:19:53.475 --rc genhtml_function_coverage=1 00:19:53.475 --rc genhtml_legend=1 00:19:53.475 --rc geninfo_all_blocks=1 00:19:53.475 --rc geninfo_unexecuted_blocks=1 00:19:53.475 00:19:53.475 ' 00:19:53.475 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:53.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:53.476 --rc genhtml_branch_coverage=1 00:19:53.476 --rc genhtml_function_coverage=1 00:19:53.476 --rc genhtml_legend=1 00:19:53.476 --rc geninfo_all_blocks=1 00:19:53.476 --rc 
geninfo_unexecuted_blocks=1 00:19:53.476 00:19:53.476 ' 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@50 -- # : 0 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:53.476 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1314044 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1314044' 00:19:53.476 Process pid: 1314044 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 
'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1314044 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1314044 ']' 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:53.476 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:53.476 [2024-12-05 12:03:18.309208] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:19:53.476 [2024-12-05 12:03:18.309277] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.476 [2024-12-05 12:03:18.399072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:53.476 [2024-12-05 12:03:18.433392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:53.476 [2024-12-05 12:03:18.433421] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.476 [2024-12-05 12:03:18.433426] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.476 [2024-12-05 12:03:18.433431] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.476 [2024-12-05 12:03:18.433435] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.476 [2024-12-05 12:03:18.434977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:53.476 [2024-12-05 12:03:18.435127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:53.476 [2024-12-05 12:03:18.435282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:53.476 [2024-12-05 12:03:18.435284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:54.418 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.418 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:54.418 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:55.360 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:19:55.360 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:55.360 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:55.360 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:55.360 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user1/1 00:19:55.360 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:55.620 Malloc1 00:19:55.620 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:55.882 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:55.882 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:19:56.144 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:56.144 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:56.144 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:56.406 Malloc2 00:19:56.406 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:56.406 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:56.667 12:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:56.930 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:19:56.930 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:19:56.930 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:56.930 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:19:56.930 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:19:56.930 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:56.930 [2024-12-05 12:03:21.811015] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:19:56.930 [2024-12-05 12:03:21.811057] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1314735 ] 00:19:56.930 [2024-12-05 12:03:21.850741] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:19:56.930 [2024-12-05 12:03:21.856043] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:56.930 [2024-12-05 12:03:21.856060] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1dbdbd1000 00:19:56.930 [2024-12-05 12:03:21.857042] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:56.930 [2024-12-05 12:03:21.858040] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:56.930 [2024-12-05 12:03:21.859043] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:56.930 [2024-12-05 12:03:21.860050] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:56.930 [2024-12-05 12:03:21.861049] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:56.930 [2024-12-05 12:03:21.862064] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:56.930 [2024-12-05 12:03:21.863069] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:56.930 
[2024-12-05 12:03:21.864077] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:56.930 [2024-12-05 12:03:21.865079] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:56.930 [2024-12-05 12:03:21.865087] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1dbdbc6000 00:19:56.930 [2024-12-05 12:03:21.866000] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:56.930 [2024-12-05 12:03:21.875457] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:19:56.930 [2024-12-05 12:03:21.875481] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:19:56.930 [2024-12-05 12:03:21.881165] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:56.930 [2024-12-05 12:03:21.881198] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:56.930 [2024-12-05 12:03:21.881264] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:19:56.930 [2024-12-05 12:03:21.881278] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:19:56.930 [2024-12-05 12:03:21.881282] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:19:56.930 [2024-12-05 12:03:21.882165] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:19:56.930 [2024-12-05 12:03:21.882173] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:19:56.930 [2024-12-05 12:03:21.882178] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:19:56.930 [2024-12-05 12:03:21.883170] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:19:56.930 [2024-12-05 12:03:21.883176] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:19:56.930 [2024-12-05 12:03:21.883182] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:56.930 [2024-12-05 12:03:21.884175] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:19:56.930 [2024-12-05 12:03:21.884181] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:56.930 [2024-12-05 12:03:21.885178] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:19:56.930 [2024-12-05 12:03:21.885184] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:19:56.930 [2024-12-05 12:03:21.885188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:56.930 [2024-12-05 12:03:21.885193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:56.930 [2024-12-05 12:03:21.885299] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:19:56.930 [2024-12-05 12:03:21.885302] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:56.930 [2024-12-05 12:03:21.885306] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:19:56.930 [2024-12-05 12:03:21.886196] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:19:56.930 [2024-12-05 12:03:21.887193] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:19:56.930 [2024-12-05 12:03:21.888201] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:56.930 [2024-12-05 12:03:21.889200] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:56.930 [2024-12-05 12:03:21.889250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:56.930 [2024-12-05 12:03:21.890210] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:19:56.930 [2024-12-05 12:03:21.890216] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:56.930 [2024-12-05 12:03:21.890220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:56.930 [2024-12-05 12:03:21.890234] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:19:56.931 [2024-12-05 12:03:21.890239] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890253] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:56.931 [2024-12-05 12:03:21.890257] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:56.931 [2024-12-05 12:03:21.890260] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:56.931 [2024-12-05 12:03:21.890271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:56.931 [2024-12-05 12:03:21.890309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:56.931 [2024-12-05 12:03:21.890317] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:19:56.931 [2024-12-05 12:03:21.890321] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:19:56.931 [2024-12-05 12:03:21.890324] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:19:56.931 [2024-12-05 12:03:21.890328] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:56.931 [2024-12-05 12:03:21.890331] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:19:56.931 [2024-12-05 12:03:21.890335] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:19:56.931 [2024-12-05 12:03:21.890338] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890345] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:56.931 [2024-12-05 12:03:21.890364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:56.931 [2024-12-05 12:03:21.890372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.931 [2024-12-05 12:03:21.890379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.931 [2024-12-05 12:03:21.890385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.931 [2024-12-05 12:03:21.890391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:56.931 [2024-12-05 12:03:21.890396] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890409] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:56.931 [2024-12-05 12:03:21.890417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:56.931 [2024-12-05 12:03:21.890422] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:19:56.931 [2024-12-05 12:03:21.890425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890433] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890444] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:56.931 [2024-12-05 12:03:21.890451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:56.931 [2024-12-05 12:03:21.890497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890503] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:56.931 
[2024-12-05 12:03:21.890509] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:56.931 [2024-12-05 12:03:21.890512] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:56.931 [2024-12-05 12:03:21.890514] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:56.931 [2024-12-05 12:03:21.890519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:56.931 [2024-12-05 12:03:21.890533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:56.931 [2024-12-05 12:03:21.890542] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:19:56.931 [2024-12-05 12:03:21.890551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890557] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890562] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:56.931 [2024-12-05 12:03:21.890565] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:56.931 [2024-12-05 12:03:21.890567] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:56.931 [2024-12-05 12:03:21.890572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:56.931 [2024-12-05 12:03:21.890587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:56.931 [2024-12-05 12:03:21.890596] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890608] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:56.931 [2024-12-05 12:03:21.890611] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:56.931 [2024-12-05 12:03:21.890613] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:56.931 [2024-12-05 12:03:21.890617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:56.931 [2024-12-05 12:03:21.890627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:56.931 [2024-12-05 12:03:21.890635] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890640] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890650] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890664] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:56.931 [2024-12-05 12:03:21.890668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:19:56.931 [2024-12-05 12:03:21.890672] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:19:56.931 [2024-12-05 12:03:21.890688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:56.931 [2024-12-05 12:03:21.890698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:56.931 [2024-12-05 12:03:21.890707] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:56.931 [2024-12-05 12:03:21.890713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:56.931 [2024-12-05 12:03:21.890721] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:56.931 [2024-12-05 12:03:21.890731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:56.931 [2024-12-05 
12:03:21.890741] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:56.931 [2024-12-05 12:03:21.890748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:56.931 [2024-12-05 12:03:21.890761] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:56.931 [2024-12-05 12:03:21.890767] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:56.931 [2024-12-05 12:03:21.890772] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:56.931 [2024-12-05 12:03:21.890775] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:56.931 [2024-12-05 12:03:21.890777] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:56.931 [2024-12-05 12:03:21.890784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:56.931 [2024-12-05 12:03:21.890792] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:56.931 [2024-12-05 12:03:21.890796] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:56.931 [2024-12-05 12:03:21.890801] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:56.931 [2024-12-05 12:03:21.890806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:56.931 [2024-12-05 12:03:21.890813] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:56.932 [2024-12-05 12:03:21.890818] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:56.932 [2024-12-05 12:03:21.890823] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:56.932 [2024-12-05 12:03:21.890828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:56.932 [2024-12-05 12:03:21.890836] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:56.932 [2024-12-05 12:03:21.890839] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:56.932 [2024-12-05 12:03:21.890845] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:56.932 [2024-12-05 12:03:21.890851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:56.932 [2024-12-05 12:03:21.890858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:56.932 [2024-12-05 12:03:21.890867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:56.932 [2024-12-05 12:03:21.890877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:56.932 [2024-12-05 12:03:21.890883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:56.932 ===================================================== 00:19:56.932 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:19:56.932 ===================================================== 00:19:56.932 Controller Capabilities/Features 00:19:56.932 
================================ 00:19:56.932 Vendor ID: 4e58 00:19:56.932 Subsystem Vendor ID: 4e58 00:19:56.932 Serial Number: SPDK1 00:19:56.932 Model Number: SPDK bdev Controller 00:19:56.932 Firmware Version: 25.01 00:19:56.932 Recommended Arb Burst: 6 00:19:56.932 IEEE OUI Identifier: 8d 6b 50 00:19:56.932 Multi-path I/O 00:19:56.932 May have multiple subsystem ports: Yes 00:19:56.932 May have multiple controllers: Yes 00:19:56.932 Associated with SR-IOV VF: No 00:19:56.932 Max Data Transfer Size: 131072 00:19:56.932 Max Number of Namespaces: 32 00:19:56.932 Max Number of I/O Queues: 127 00:19:56.932 NVMe Specification Version (VS): 1.3 00:19:56.932 NVMe Specification Version (Identify): 1.3 00:19:56.932 Maximum Queue Entries: 256 00:19:56.932 Contiguous Queues Required: Yes 00:19:56.932 Arbitration Mechanisms Supported 00:19:56.932 Weighted Round Robin: Not Supported 00:19:56.932 Vendor Specific: Not Supported 00:19:56.932 Reset Timeout: 15000 ms 00:19:56.932 Doorbell Stride: 4 bytes 00:19:56.932 NVM Subsystem Reset: Not Supported 00:19:56.932 Command Sets Supported 00:19:56.932 NVM Command Set: Supported 00:19:56.932 Boot Partition: Not Supported 00:19:56.932 Memory Page Size Minimum: 4096 bytes 00:19:56.932 Memory Page Size Maximum: 4096 bytes 00:19:56.932 Persistent Memory Region: Not Supported 00:19:56.932 Optional Asynchronous Events Supported 00:19:56.932 Namespace Attribute Notices: Supported 00:19:56.932 Firmware Activation Notices: Not Supported 00:19:56.932 ANA Change Notices: Not Supported 00:19:56.932 PLE Aggregate Log Change Notices: Not Supported 00:19:56.932 LBA Status Info Alert Notices: Not Supported 00:19:56.932 EGE Aggregate Log Change Notices: Not Supported 00:19:56.932 Normal NVM Subsystem Shutdown event: Not Supported 00:19:56.932 Zone Descriptor Change Notices: Not Supported 00:19:56.932 Discovery Log Change Notices: Not Supported 00:19:56.932 Controller Attributes 00:19:56.932 128-bit Host Identifier: Supported 00:19:56.932 
Non-Operational Permissive Mode: Not Supported 00:19:56.932 NVM Sets: Not Supported 00:19:56.932 Read Recovery Levels: Not Supported 00:19:56.932 Endurance Groups: Not Supported 00:19:56.932 Predictable Latency Mode: Not Supported 00:19:56.932 Traffic Based Keep ALive: Not Supported 00:19:56.932 Namespace Granularity: Not Supported 00:19:56.932 SQ Associations: Not Supported 00:19:56.932 UUID List: Not Supported 00:19:56.932 Multi-Domain Subsystem: Not Supported 00:19:56.932 Fixed Capacity Management: Not Supported 00:19:56.932 Variable Capacity Management: Not Supported 00:19:56.932 Delete Endurance Group: Not Supported 00:19:56.932 Delete NVM Set: Not Supported 00:19:56.932 Extended LBA Formats Supported: Not Supported 00:19:56.932 Flexible Data Placement Supported: Not Supported 00:19:56.932 00:19:56.932 Controller Memory Buffer Support 00:19:56.932 ================================ 00:19:56.932 Supported: No 00:19:56.932 00:19:56.932 Persistent Memory Region Support 00:19:56.932 ================================ 00:19:56.932 Supported: No 00:19:56.932 00:19:56.932 Admin Command Set Attributes 00:19:56.932 ============================ 00:19:56.932 Security Send/Receive: Not Supported 00:19:56.932 Format NVM: Not Supported 00:19:56.932 Firmware Activate/Download: Not Supported 00:19:56.932 Namespace Management: Not Supported 00:19:56.932 Device Self-Test: Not Supported 00:19:56.932 Directives: Not Supported 00:19:56.932 NVMe-MI: Not Supported 00:19:56.932 Virtualization Management: Not Supported 00:19:56.932 Doorbell Buffer Config: Not Supported 00:19:56.932 Get LBA Status Capability: Not Supported 00:19:56.932 Command & Feature Lockdown Capability: Not Supported 00:19:56.932 Abort Command Limit: 4 00:19:56.932 Async Event Request Limit: 4 00:19:56.932 Number of Firmware Slots: N/A 00:19:56.932 Firmware Slot 1 Read-Only: N/A 00:19:56.932 Firmware Activation Without Reset: N/A 00:19:56.932 Multiple Update Detection Support: N/A 00:19:56.932 Firmware Update 
Granularity: No Information Provided 00:19:56.932 Per-Namespace SMART Log: No 00:19:56.932 Asymmetric Namespace Access Log Page: Not Supported 00:19:56.932 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:19:56.932 Command Effects Log Page: Supported 00:19:56.932 Get Log Page Extended Data: Supported 00:19:56.932 Telemetry Log Pages: Not Supported 00:19:56.932 Persistent Event Log Pages: Not Supported 00:19:56.932 Supported Log Pages Log Page: May Support 00:19:56.932 Commands Supported & Effects Log Page: Not Supported 00:19:56.932 Feature Identifiers & Effects Log Page:May Support 00:19:56.932 NVMe-MI Commands & Effects Log Page: May Support 00:19:56.932 Data Area 4 for Telemetry Log: Not Supported 00:19:56.932 Error Log Page Entries Supported: 128 00:19:56.932 Keep Alive: Supported 00:19:56.932 Keep Alive Granularity: 10000 ms 00:19:56.932 00:19:56.932 NVM Command Set Attributes 00:19:56.932 ========================== 00:19:56.932 Submission Queue Entry Size 00:19:56.932 Max: 64 00:19:56.932 Min: 64 00:19:56.932 Completion Queue Entry Size 00:19:56.932 Max: 16 00:19:56.932 Min: 16 00:19:56.932 Number of Namespaces: 32 00:19:56.932 Compare Command: Supported 00:19:56.932 Write Uncorrectable Command: Not Supported 00:19:56.932 Dataset Management Command: Supported 00:19:56.932 Write Zeroes Command: Supported 00:19:56.932 Set Features Save Field: Not Supported 00:19:56.932 Reservations: Not Supported 00:19:56.932 Timestamp: Not Supported 00:19:56.932 Copy: Supported 00:19:56.932 Volatile Write Cache: Present 00:19:56.932 Atomic Write Unit (Normal): 1 00:19:56.932 Atomic Write Unit (PFail): 1 00:19:56.932 Atomic Compare & Write Unit: 1 00:19:56.932 Fused Compare & Write: Supported 00:19:56.932 Scatter-Gather List 00:19:56.932 SGL Command Set: Supported (Dword aligned) 00:19:56.932 SGL Keyed: Not Supported 00:19:56.932 SGL Bit Bucket Descriptor: Not Supported 00:19:56.932 SGL Metadata Pointer: Not Supported 00:19:56.932 Oversized SGL: Not Supported 00:19:56.932 SGL 
Metadata Address: Not Supported 00:19:56.932 SGL Offset: Not Supported 00:19:56.932 Transport SGL Data Block: Not Supported 00:19:56.932 Replay Protected Memory Block: Not Supported 00:19:56.932 00:19:56.932 Firmware Slot Information 00:19:56.932 ========================= 00:19:56.932 Active slot: 1 00:19:56.932 Slot 1 Firmware Revision: 25.01 00:19:56.932 00:19:56.932 00:19:56.932 Commands Supported and Effects 00:19:56.932 ============================== 00:19:56.932 Admin Commands 00:19:56.932 -------------- 00:19:56.932 Get Log Page (02h): Supported 00:19:56.932 Identify (06h): Supported 00:19:56.932 Abort (08h): Supported 00:19:56.932 Set Features (09h): Supported 00:19:56.932 Get Features (0Ah): Supported 00:19:56.932 Asynchronous Event Request (0Ch): Supported 00:19:56.932 Keep Alive (18h): Supported 00:19:56.932 I/O Commands 00:19:56.932 ------------ 00:19:56.932 Flush (00h): Supported LBA-Change 00:19:56.932 Write (01h): Supported LBA-Change 00:19:56.932 Read (02h): Supported 00:19:56.932 Compare (05h): Supported 00:19:56.932 Write Zeroes (08h): Supported LBA-Change 00:19:56.932 Dataset Management (09h): Supported LBA-Change 00:19:56.932 Copy (19h): Supported LBA-Change 00:19:56.932 00:19:56.932 Error Log 00:19:56.932 ========= 00:19:56.932 00:19:56.932 Arbitration 00:19:56.932 =========== 00:19:56.932 Arbitration Burst: 1 00:19:56.932 00:19:56.932 Power Management 00:19:56.932 ================ 00:19:56.933 Number of Power States: 1 00:19:56.933 Current Power State: Power State #0 00:19:56.933 Power State #0: 00:19:56.933 Max Power: 0.00 W 00:19:56.933 Non-Operational State: Operational 00:19:56.933 Entry Latency: Not Reported 00:19:56.933 Exit Latency: Not Reported 00:19:56.933 Relative Read Throughput: 0 00:19:56.933 Relative Read Latency: 0 00:19:56.933 Relative Write Throughput: 0 00:19:56.933 Relative Write Latency: 0 00:19:56.933 Idle Power: Not Reported 00:19:56.933 Active Power: Not Reported 00:19:56.933 Non-Operational Permissive Mode: Not 
Supported 00:19:56.933 00:19:56.933 Health Information 00:19:56.933 ================== 00:19:56.933 Critical Warnings: 00:19:56.933 Available Spare Space: OK 00:19:56.933 Temperature: OK 00:19:56.933 Device Reliability: OK 00:19:56.933 Read Only: No 00:19:56.933 Volatile Memory Backup: OK 00:19:56.933 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:56.933 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:56.933 Available Spare: 0% 00:19:56.933 Available Sp[2024-12-05 12:03:21.890959] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:56.933 [2024-12-05 12:03:21.890967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:56.933 [2024-12-05 12:03:21.890990] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:19:56.933 [2024-12-05 12:03:21.890999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.933 [2024-12-05 12:03:21.891004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.933 [2024-12-05 12:03:21.891011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.933 [2024-12-05 12:03:21.891018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:56.933 [2024-12-05 12:03:21.891221] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:19:56.933 [2024-12-05 12:03:21.891230] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:19:56.933 
[2024-12-05 12:03:21.892224] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:56.933 [2024-12-05 12:03:21.892267] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:19:56.933 [2024-12-05 12:03:21.892273] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:19:56.933 [2024-12-05 12:03:21.893233] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:19:56.933 [2024-12-05 12:03:21.893243] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:19:56.933 [2024-12-05 12:03:21.893294] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:19:56.933 [2024-12-05 12:03:21.895460] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:56.933 are Threshold: 0% 00:19:56.933 Life Percentage Used: 0% 00:19:56.933 Data Units Read: 0 00:19:56.933 Data Units Written: 0 00:19:56.933 Host Read Commands: 0 00:19:56.933 Host Write Commands: 0 00:19:56.933 Controller Busy Time: 0 minutes 00:19:56.933 Power Cycles: 0 00:19:56.933 Power On Hours: 0 hours 00:19:56.933 Unsafe Shutdowns: 0 00:19:56.933 Unrecoverable Media Errors: 0 00:19:56.933 Lifetime Error Log Entries: 0 00:19:56.933 Warning Temperature Time: 0 minutes 00:19:56.933 Critical Temperature Time: 0 minutes 00:19:56.933 00:19:56.933 Number of Queues 00:19:56.933 ================ 00:19:56.933 Number of I/O Submission Queues: 127 00:19:56.933 Number of I/O Completion Queues: 127 00:19:56.933 00:19:56.933 Active Namespaces 00:19:56.933 ================= 00:19:56.933 Namespace ID:1 00:19:56.933 Error Recovery Timeout: Unlimited 
00:19:56.933 Command Set Identifier: NVM (00h) 00:19:56.933 Deallocate: Supported 00:19:56.933 Deallocated/Unwritten Error: Not Supported 00:19:56.933 Deallocated Read Value: Unknown 00:19:56.933 Deallocate in Write Zeroes: Not Supported 00:19:56.933 Deallocated Guard Field: 0xFFFF 00:19:56.933 Flush: Supported 00:19:56.933 Reservation: Supported 00:19:56.933 Namespace Sharing Capabilities: Multiple Controllers 00:19:56.933 Size (in LBAs): 131072 (0GiB) 00:19:56.933 Capacity (in LBAs): 131072 (0GiB) 00:19:56.933 Utilization (in LBAs): 131072 (0GiB) 00:19:56.933 NGUID: 08178C740C3642CBABD65D67C4071DFF 00:19:56.933 UUID: 08178c74-0c36-42cb-abd6-5d67c4071dff 00:19:56.933 Thin Provisioning: Not Supported 00:19:56.933 Per-NS Atomic Units: Yes 00:19:56.933 Atomic Boundary Size (Normal): 0 00:19:56.933 Atomic Boundary Size (PFail): 0 00:19:56.933 Atomic Boundary Offset: 0 00:19:56.933 Maximum Single Source Range Length: 65535 00:19:56.933 Maximum Copy Length: 65535 00:19:56.933 Maximum Source Range Count: 1 00:19:56.933 NGUID/EUI64 Never Reused: No 00:19:56.933 Namespace Write Protected: No 00:19:56.933 Number of LBA Formats: 1 00:19:56.933 Current LBA Format: LBA Format #00 00:19:56.933 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:56.933 00:19:56.933 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:57.195 [2024-12-05 12:03:22.062081] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:02.476 Initializing NVMe Controllers 00:20:02.476 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:02.476 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 
00:20:02.476 Initialization complete. Launching workers. 00:20:02.476 ======================================================== 00:20:02.476 Latency(us) 00:20:02.476 Device Information : IOPS MiB/s Average min max 00:20:02.476 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39964.45 156.11 3202.72 864.84 6818.73 00:20:02.476 ======================================================== 00:20:02.476 Total : 39964.45 156.11 3202.72 864.84 6818.73 00:20:02.476 00:20:02.476 [2024-12-05 12:03:27.083681] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:02.476 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:02.476 [2024-12-05 12:03:27.275516] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:07.760 Initializing NVMe Controllers 00:20:07.760 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:07.760 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:20:07.760 Initialization complete. Launching workers. 
00:20:07.760 ======================================================== 00:20:07.760 Latency(us) 00:20:07.760 Device Information : IOPS MiB/s Average min max 00:20:07.760 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.38 62.70 7974.36 6320.53 8971.75 00:20:07.760 ======================================================== 00:20:07.760 Total : 16050.38 62.70 7974.36 6320.53 8971.75 00:20:07.760 00:20:07.760 [2024-12-05 12:03:32.307444] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:07.760 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:07.760 [2024-12-05 12:03:32.507305] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:13.043 [2024-12-05 12:03:37.581710] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:13.043 Initializing NVMe Controllers 00:20:13.043 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:13.043 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:20:13.043 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:20:13.043 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:20:13.043 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:20:13.043 Initialization complete. Launching workers. 
00:20:13.043 Starting thread on core 2 00:20:13.043 Starting thread on core 3 00:20:13.043 Starting thread on core 1 00:20:13.043 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:20:13.043 [2024-12-05 12:03:37.829479] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:16.341 [2024-12-05 12:03:40.891842] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:16.341 Initializing NVMe Controllers 00:20:16.341 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:16.341 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:16.341 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:20:16.341 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:20:16.341 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:20:16.341 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:20:16.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:16.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:16.341 Initialization complete. Launching workers. 
00:20:16.341 Starting thread on core 1 with urgent priority queue 00:20:16.341 Starting thread on core 2 with urgent priority queue 00:20:16.341 Starting thread on core 3 with urgent priority queue 00:20:16.341 Starting thread on core 0 with urgent priority queue 00:20:16.341 SPDK bdev Controller (SPDK1 ) core 0: 9123.00 IO/s 10.96 secs/100000 ios 00:20:16.341 SPDK bdev Controller (SPDK1 ) core 1: 13085.00 IO/s 7.64 secs/100000 ios 00:20:16.341 SPDK bdev Controller (SPDK1 ) core 2: 9052.00 IO/s 11.05 secs/100000 ios 00:20:16.341 SPDK bdev Controller (SPDK1 ) core 3: 12246.67 IO/s 8.17 secs/100000 ios 00:20:16.341 ======================================================== 00:20:16.341 00:20:16.341 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:20:16.341 [2024-12-05 12:03:41.126735] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:16.341 Initializing NVMe Controllers 00:20:16.341 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:16.341 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:16.341 Namespace ID: 1 size: 0GB 00:20:16.341 Initialization complete. 00:20:16.341 INFO: using host memory buffer for IO 00:20:16.341 Hello world! 
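The spdk_nvme_perf tables earlier in the log can be sanity-checked by hand: the MiB/s column is just IOPS times the I/O size from `-o 4096`, divided by 2^20. A minimal check, using the values copied from the read run's table (39964.45 IOPS, 156.11 MiB/s):

```shell
#!/bin/sh
# Cross-check the read-run row of the perf table above.
# iops and io_size are copied from the log (-o 4096, 39964.45 IOPS);
# the printed value should match the table's 156.11 MiB/s column.
iops=39964.45
io_size=4096
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f\n", iops * sz / (1024 * 1024) }'
```

The same arithmetic applies to the write run (16050.38 IOPS ≈ 62.70 MiB/s at 4 KiB).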
00:20:16.341 [2024-12-05 12:03:41.160937] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:16.341 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:20:16.603 [2024-12-05 12:03:41.395868] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:17.568 Initializing NVMe Controllers 00:20:17.568 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:17.568 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:17.568 Initialization complete. Launching workers. 00:20:17.568 submit (in ns) avg, min, max = 5355.0, 2832.5, 3997482.5 00:20:17.568 complete (in ns) avg, min, max = 16795.2, 1640.0, 3997630.0 00:20:17.568 00:20:17.568 Submit histogram 00:20:17.568 ================ 00:20:17.568 Range in us Cumulative Count 00:20:17.568 2.827 - 2.840: 0.1636% ( 33) 00:20:17.568 2.840 - 2.853: 1.1554% ( 200) 00:20:17.568 2.853 - 2.867: 3.4363% ( 460) 00:20:17.568 2.867 - 2.880: 7.4081% ( 801) 00:20:17.568 2.880 - 2.893: 13.4626% ( 1221) 00:20:17.568 2.893 - 2.907: 19.9782% ( 1314) 00:20:17.568 2.907 - 2.920: 26.4392% ( 1303) 00:20:17.568 2.920 - 2.933: 31.2441% ( 969) 00:20:17.568 2.933 - 2.947: 36.6044% ( 1081) 00:20:17.568 2.947 - 2.960: 42.5299% ( 1195) 00:20:17.568 2.960 - 2.973: 47.9843% ( 1100) 00:20:17.568 2.973 - 2.987: 54.5148% ( 1317) 00:20:17.568 2.987 - 3.000: 61.9081% ( 1491) 00:20:17.568 3.000 - 3.013: 70.4864% ( 1730) 00:20:17.568 3.013 - 3.027: 78.5937% ( 1635) 00:20:17.568 3.027 - 3.040: 85.6548% ( 1424) 00:20:17.568 3.040 - 3.053: 91.0249% ( 1083) 00:20:17.568 3.053 - 3.067: 95.4331% ( 889) 00:20:17.568 3.067 - 3.080: 97.3273% ( 382) 00:20:17.568 3.080 - 3.093: 98.4083% ( 218) 00:20:17.568 3.093 - 3.107: 
98.9785% ( 115) 00:20:17.568 3.107 - 3.120: 99.3058% ( 66) 00:20:17.568 3.120 - 3.133: 99.4893% ( 37) 00:20:17.568 3.133 - 3.147: 99.5537% ( 13) 00:20:17.568 3.147 - 3.160: 99.6033% ( 10) 00:20:17.568 3.160 - 3.173: 99.6132% ( 2) 00:20:17.568 3.187 - 3.200: 99.6182% ( 1) 00:20:17.568 3.213 - 3.227: 99.6231% ( 1) 00:20:17.568 3.440 - 3.467: 99.6281% ( 1) 00:20:17.568 3.707 - 3.733: 99.6331% ( 1) 00:20:17.568 3.893 - 3.920: 99.6380% ( 1) 00:20:17.568 4.027 - 4.053: 99.6430% ( 1) 00:20:17.568 4.133 - 4.160: 99.6479% ( 1) 00:20:17.568 4.320 - 4.347: 99.6529% ( 1) 00:20:17.568 4.480 - 4.507: 99.6579% ( 1) 00:20:17.568 4.533 - 4.560: 99.6628% ( 1) 00:20:17.568 4.560 - 4.587: 99.6777% ( 3) 00:20:17.568 4.613 - 4.640: 99.6826% ( 1) 00:20:17.568 4.640 - 4.667: 99.6926% ( 2) 00:20:17.568 4.667 - 4.693: 99.6975% ( 1) 00:20:17.568 4.693 - 4.720: 99.7025% ( 1) 00:20:17.568 4.773 - 4.800: 99.7074% ( 1) 00:20:17.568 4.800 - 4.827: 99.7223% ( 3) 00:20:17.568 4.853 - 4.880: 99.7322% ( 2) 00:20:17.568 4.907 - 4.933: 99.7372% ( 1) 00:20:17.568 4.933 - 4.960: 99.7422% ( 1) 00:20:17.568 4.960 - 4.987: 99.7521% ( 2) 00:20:17.568 5.013 - 5.040: 99.7570% ( 1) 00:20:17.568 5.040 - 5.067: 99.7669% ( 2) 00:20:17.568 5.067 - 5.093: 99.7769% ( 2) 00:20:17.568 5.093 - 5.120: 99.7818% ( 1) 00:20:17.568 5.147 - 5.173: 99.7967% ( 3) 00:20:17.568 5.173 - 5.200: 99.8116% ( 3) 00:20:17.568 5.200 - 5.227: 99.8165% ( 1) 00:20:17.568 5.253 - 5.280: 99.8215% ( 1) 00:20:17.568 5.307 - 5.333: 99.8264% ( 1) 00:20:17.568 5.467 - 5.493: 99.8314% ( 1) 00:20:17.568 5.520 - 5.547: 99.8364% ( 1) 00:20:17.568 5.547 - 5.573: 99.8413% ( 1) 00:20:17.568 5.600 - 5.627: 99.8463% ( 1) 00:20:17.568 5.680 - 5.707: 99.8512% ( 1) 00:20:17.568 5.707 - 5.733: 99.8562% ( 1) 00:20:17.568 5.787 - 5.813: 99.8661% ( 2) 00:20:17.568 5.947 - 5.973: 99.8711% ( 1) 00:20:17.568 6.027 - 6.053: 99.8760% ( 1) 00:20:17.568 6.107 - 6.133: 99.8810% ( 1) 00:20:17.568 6.213 - 6.240: 99.8860% ( 1) 00:20:17.568 6.240 - 6.267: 99.8909% ( 1) 
00:20:17.568 6.267 - 6.293: 99.8959% ( 1) 00:20:17.568 6.453 - 6.480: 99.9008% ( 1) 00:20:17.568 6.560 - 6.587: 99.9058% ( 1) 00:20:17.568 6.827 - 6.880: 99.9107% ( 1) 00:20:17.568 6.880 - 6.933: 99.9157% ( 1) 00:20:17.568 6.933 - 6.987: 99.9256% ( 2) 00:20:17.568 9.440 - 9.493: 99.9306% ( 1) 00:20:17.568 10.347 - 10.400: 99.9355% ( 1) 00:20:17.568 12.480 - 12.533: 99.9405% ( 1) 00:20:17.568 3986.773 - 4014.080: 100.0000% ( 12) 00:20:17.568 00:20:17.568 Complete histogram 00:20:17.568 ================== 00:20:17.568 [2024-12-05 12:03:42.415483] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:17.568 Range in us Cumulative Count 00:20:17.568 1.640 - 1.647: 0.0050% ( 1) 00:20:17.568 1.647 - 1.653: 0.1537% ( 30) 00:20:17.568 1.653 - 1.660: 1.0463% ( 180) 00:20:17.568 1.660 - 1.667: 1.1950% ( 30) 00:20:17.568 1.667 - 1.673: 1.3239% ( 26) 00:20:17.568 1.673 - 1.680: 1.4380% ( 23) 00:20:17.568 1.680 - 1.687: 1.5025% ( 13) 00:20:17.568 1.687 - 1.693: 1.5124% ( 2) 00:20:17.568 1.693 - 1.700: 1.5173% ( 1) 00:20:17.568 1.707 - 1.720: 51.1727% ( 10014) 00:20:17.568 1.720 - 1.733: 69.4749% ( 3691) 00:20:17.568 1.733 - 1.747: 81.9656% ( 2519) 00:20:17.568 1.747 - 1.760: 84.3011% ( 471) 00:20:17.568 1.760 - 1.773: 85.3325% ( 208) 00:20:17.568 1.773 - 1.787: 90.7572% ( 1094) 00:20:17.568 1.787 - 1.800: 96.2216% ( 1102) 00:20:17.568 1.800 - 1.813: 98.4033% ( 440) 00:20:17.568 1.813 - 1.827: 99.1868% ( 158) 00:20:17.568 1.827 - 1.840: 99.4198% ( 47) 00:20:17.568 1.840 - 1.853: 99.4496% ( 6) 00:20:17.568 1.853 - 1.867: 99.4595% ( 2) 00:20:17.568 3.347 - 3.360: 99.4645% ( 1) 00:20:17.568 3.373 - 3.387: 99.4694% ( 1) 00:20:17.568 3.387 - 3.400: 99.4744% ( 1) 00:20:17.568 3.467 - 3.493: 99.4793% ( 1) 00:20:17.568 3.493 - 3.520: 99.4843% ( 1) 00:20:17.568 3.680 - 3.707: 99.4893% ( 1) 00:20:17.568 3.733 - 3.760: 99.4992% ( 2) 00:20:17.568 3.760 - 3.787: 99.5041% ( 1) 00:20:17.568 3.813 - 3.840: 99.5091% ( 1) 00:20:17.568 3.867 -
3.893: 99.5190% ( 2) 00:20:17.569 3.920 - 3.947: 99.5289% ( 2) 00:20:17.569 3.947 - 3.973: 99.5339% ( 1) 00:20:17.569 4.027 - 4.053: 99.5438% ( 2) 00:20:17.569 4.293 - 4.320: 99.5488% ( 1) 00:20:17.569 4.373 - 4.400: 99.5537% ( 1) 00:20:17.569 4.400 - 4.427: 99.5587% ( 1) 00:20:17.569 4.533 - 4.560: 99.5686% ( 2) 00:20:17.569 4.773 - 4.800: 99.5736% ( 1) 00:20:17.569 4.907 - 4.933: 99.5785% ( 1) 00:20:17.569 4.933 - 4.960: 99.5835% ( 1) 00:20:17.569 4.960 - 4.987: 99.5884% ( 1) 00:20:17.569 5.040 - 5.067: 99.5934% ( 1) 00:20:17.569 5.387 - 5.413: 99.5984% ( 1) 00:20:17.569 5.600 - 5.627: 99.6033% ( 1) 00:20:17.569 5.973 - 6.000: 99.6083% ( 1) 00:20:17.569 11.467 - 11.520: 99.6132% ( 1) 00:20:17.569 31.787 - 32.000: 99.6182% ( 1) 00:20:17.569 135.680 - 136.533: 99.6231% ( 1) 00:20:17.569 3986.773 - 4014.080: 100.0000% ( 76) 00:20:17.569 00:20:17.569 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:20:17.569 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:20:17.569 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:20:17.569 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:20:17.569 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:17.569 [ 00:20:17.569 { 00:20:17.569 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:17.569 "subtype": "Discovery", 00:20:17.569 "listen_addresses": [], 00:20:17.569 "allow_any_host": true, 00:20:17.569 "hosts": [] 00:20:17.569 }, 00:20:17.569 { 00:20:17.569 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:17.569 "subtype": "NVMe", 00:20:17.569 
"listen_addresses": [ 00:20:17.569 { 00:20:17.569 "trtype": "VFIOUSER", 00:20:17.569 "adrfam": "IPv4", 00:20:17.569 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:17.569 "trsvcid": "0" 00:20:17.569 } 00:20:17.569 ], 00:20:17.569 "allow_any_host": true, 00:20:17.569 "hosts": [], 00:20:17.569 "serial_number": "SPDK1", 00:20:17.569 "model_number": "SPDK bdev Controller", 00:20:17.569 "max_namespaces": 32, 00:20:17.569 "min_cntlid": 1, 00:20:17.569 "max_cntlid": 65519, 00:20:17.569 "namespaces": [ 00:20:17.569 { 00:20:17.569 "nsid": 1, 00:20:17.569 "bdev_name": "Malloc1", 00:20:17.569 "name": "Malloc1", 00:20:17.569 "nguid": "08178C740C3642CBABD65D67C4071DFF", 00:20:17.569 "uuid": "08178c74-0c36-42cb-abd6-5d67c4071dff" 00:20:17.569 } 00:20:17.569 ] 00:20:17.569 }, 00:20:17.569 { 00:20:17.569 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:17.569 "subtype": "NVMe", 00:20:17.569 "listen_addresses": [ 00:20:17.569 { 00:20:17.569 "trtype": "VFIOUSER", 00:20:17.569 "adrfam": "IPv4", 00:20:17.569 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:17.569 "trsvcid": "0" 00:20:17.569 } 00:20:17.569 ], 00:20:17.569 "allow_any_host": true, 00:20:17.569 "hosts": [], 00:20:17.569 "serial_number": "SPDK2", 00:20:17.569 "model_number": "SPDK bdev Controller", 00:20:17.569 "max_namespaces": 32, 00:20:17.569 "min_cntlid": 1, 00:20:17.569 "max_cntlid": 65519, 00:20:17.569 "namespaces": [ 00:20:17.569 { 00:20:17.569 "nsid": 1, 00:20:17.569 "bdev_name": "Malloc2", 00:20:17.569 "name": "Malloc2", 00:20:17.569 "nguid": "B6D70931158C41739B8C900171B870AA", 00:20:17.569 "uuid": "b6d70931-158c-4173-9b8c-900171b870aa" 00:20:17.569 } 00:20:17.569 ] 00:20:17.569 } 00:20:17.569 ] 00:20:17.829 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:17.829 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1318770 00:20:17.829 12:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:17.829 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:20:17.829 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:17.829 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:17.829 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:20:17.829 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:17.829 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:20:17.829 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:20:17.829 [2024-12-05 12:03:42.798804] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:20:17.829 Malloc3 00:20:17.829 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:20:18.089 [2024-12-05 12:03:42.985117] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:20:18.089 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:18.089 Asynchronous Event 
Request test 00:20:18.089 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:20:18.089 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:20:18.089 Registering asynchronous event callbacks... 00:20:18.089 Starting namespace attribute notice tests for all controllers... 00:20:18.090 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:18.090 aer_cb - Changed Namespace 00:20:18.090 Cleaning up... 00:20:18.350 [ 00:20:18.350 { 00:20:18.350 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:18.350 "subtype": "Discovery", 00:20:18.350 "listen_addresses": [], 00:20:18.350 "allow_any_host": true, 00:20:18.350 "hosts": [] 00:20:18.350 }, 00:20:18.350 { 00:20:18.350 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:18.350 "subtype": "NVMe", 00:20:18.350 "listen_addresses": [ 00:20:18.350 { 00:20:18.350 "trtype": "VFIOUSER", 00:20:18.350 "adrfam": "IPv4", 00:20:18.350 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:18.350 "trsvcid": "0" 00:20:18.350 } 00:20:18.350 ], 00:20:18.350 "allow_any_host": true, 00:20:18.350 "hosts": [], 00:20:18.350 "serial_number": "SPDK1", 00:20:18.350 "model_number": "SPDK bdev Controller", 00:20:18.350 "max_namespaces": 32, 00:20:18.350 "min_cntlid": 1, 00:20:18.350 "max_cntlid": 65519, 00:20:18.350 "namespaces": [ 00:20:18.350 { 00:20:18.350 "nsid": 1, 00:20:18.350 "bdev_name": "Malloc1", 00:20:18.350 "name": "Malloc1", 00:20:18.350 "nguid": "08178C740C3642CBABD65D67C4071DFF", 00:20:18.350 "uuid": "08178c74-0c36-42cb-abd6-5d67c4071dff" 00:20:18.350 }, 00:20:18.350 { 00:20:18.350 "nsid": 2, 00:20:18.350 "bdev_name": "Malloc3", 00:20:18.350 "name": "Malloc3", 00:20:18.350 "nguid": "09881FAB0FE04B219AD368A74180A886", 00:20:18.350 "uuid": "09881fab-0fe0-4b21-9ad3-68a74180a886" 00:20:18.350 } 00:20:18.350 ] 00:20:18.350 }, 00:20:18.350 { 00:20:18.350 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:18.350 "subtype": "NVMe", 00:20:18.350 "listen_addresses": [ 00:20:18.350 { 00:20:18.350 
"trtype": "VFIOUSER", 00:20:18.350 "adrfam": "IPv4", 00:20:18.351 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:18.351 "trsvcid": "0" 00:20:18.351 } 00:20:18.351 ], 00:20:18.351 "allow_any_host": true, 00:20:18.351 "hosts": [], 00:20:18.351 "serial_number": "SPDK2", 00:20:18.351 "model_number": "SPDK bdev Controller", 00:20:18.351 "max_namespaces": 32, 00:20:18.351 "min_cntlid": 1, 00:20:18.351 "max_cntlid": 65519, 00:20:18.351 "namespaces": [ 00:20:18.351 { 00:20:18.351 "nsid": 1, 00:20:18.351 "bdev_name": "Malloc2", 00:20:18.351 "name": "Malloc2", 00:20:18.351 "nguid": "B6D70931158C41739B8C900171B870AA", 00:20:18.351 "uuid": "b6d70931-158c-4173-9b8c-900171b870aa" 00:20:18.351 } 00:20:18.351 ] 00:20:18.351 } 00:20:18.351 ] 00:20:18.351 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1318770 00:20:18.351 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:18.351 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:18.351 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:20:18.351 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:20:18.351 [2024-12-05 12:03:43.226328] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:20:18.351 [2024-12-05 12:03:43.226384] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1319010 ] 00:20:18.351 [2024-12-05 12:03:43.266680] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:20:18.351 [2024-12-05 12:03:43.271863] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:18.351 [2024-12-05 12:03:43.271882] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd7b48f8000 00:20:18.351 [2024-12-05 12:03:43.272866] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:18.351 [2024-12-05 12:03:43.273871] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:18.351 [2024-12-05 12:03:43.274875] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:18.351 [2024-12-05 12:03:43.275887] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:18.351 [2024-12-05 12:03:43.276898] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:18.351 [2024-12-05 12:03:43.277902] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:18.351 [2024-12-05 12:03:43.278905] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:20:18.351 
[2024-12-05 12:03:43.279911] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:20:18.351 [2024-12-05 12:03:43.280920] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:20:18.351 [2024-12-05 12:03:43.280927] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd7b48ed000 00:20:18.351 [2024-12-05 12:03:43.281839] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:18.351 [2024-12-05 12:03:43.291214] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:20:18.351 [2024-12-05 12:03:43.291233] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:20:18.351 [2024-12-05 12:03:43.296296] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:20:18.351 [2024-12-05 12:03:43.296331] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:20:18.351 [2024-12-05 12:03:43.296391] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:20:18.351 [2024-12-05 12:03:43.296402] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:20:18.351 [2024-12-05 12:03:43.296406] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:20:18.351 [2024-12-05 12:03:43.297298] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:20:18.351 [2024-12-05 12:03:43.297307] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:20:18.351 [2024-12-05 12:03:43.297313] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:20:18.351 [2024-12-05 12:03:43.298304] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:20:18.351 [2024-12-05 12:03:43.298312] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:20:18.351 [2024-12-05 12:03:43.298320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:20:18.351 [2024-12-05 12:03:43.299309] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:20:18.351 [2024-12-05 12:03:43.299316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:18.351 [2024-12-05 12:03:43.300313] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:20:18.351 [2024-12-05 12:03:43.300321] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:20:18.351 [2024-12-05 12:03:43.300324] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:20:18.351 [2024-12-05 12:03:43.300329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:18.351 [2024-12-05 12:03:43.300435] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:20:18.351 [2024-12-05 12:03:43.300438] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:18.351 [2024-12-05 12:03:43.300442] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:20:18.351 [2024-12-05 12:03:43.301319] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:20:18.351 [2024-12-05 12:03:43.302324] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:20:18.351 [2024-12-05 12:03:43.303336] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:20:18.351 [2024-12-05 12:03:43.304335] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:18.351 [2024-12-05 12:03:43.304366] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:18.351 [2024-12-05 12:03:43.305345] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:20:18.351 [2024-12-05 12:03:43.305351] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:18.351 [2024-12-05 12:03:43.305355] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:20:18.351 [2024-12-05 12:03:43.305369] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:20:18.351 [2024-12-05 12:03:43.305375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.305386] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:18.352 [2024-12-05 12:03:43.305390] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:18.352 [2024-12-05 12:03:43.305392] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:18.352 [2024-12-05 12:03:43.305401] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:18.352 [2024-12-05 12:03:43.313461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:20:18.352 [2024-12-05 12:03:43.313470] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:20:18.352 [2024-12-05 12:03:43.313474] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:20:18.352 [2024-12-05 12:03:43.313477] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:20:18.352 [2024-12-05 12:03:43.313480] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:20:18.352 [2024-12-05 12:03:43.313484] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:20:18.352 [2024-12-05 12:03:43.313487] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:20:18.352 [2024-12-05 12:03:43.313490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.313496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.313503] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:20:18.352 [2024-12-05 12:03:43.321459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:20:18.352 [2024-12-05 12:03:43.321469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.352 [2024-12-05 12:03:43.321475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.352 [2024-12-05 12:03:43.321481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.352 [2024-12-05 12:03:43.321487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:20:18.352 [2024-12-05 12:03:43.321491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.321498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.321504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:20:18.352 [2024-12-05 12:03:43.329459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:20:18.352 [2024-12-05 12:03:43.329465] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:20:18.352 [2024-12-05 12:03:43.329469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.329476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.329480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.329487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:18.352 [2024-12-05 12:03:43.337459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:20:18.352 [2024-12-05 12:03:43.337508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.337514] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:20:18.352 
[2024-12-05 12:03:43.337519] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:20:18.352 [2024-12-05 12:03:43.337523] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:20:18.352 [2024-12-05 12:03:43.337525] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:18.352 [2024-12-05 12:03:43.337530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:20:18.352 [2024-12-05 12:03:43.345459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:20:18.352 [2024-12-05 12:03:43.345469] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:20:18.352 [2024-12-05 12:03:43.345478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.345484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.345489] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:18.352 [2024-12-05 12:03:43.345492] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:18.352 [2024-12-05 12:03:43.345494] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:18.352 [2024-12-05 12:03:43.345499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:18.352 [2024-12-05 12:03:43.353460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:20:18.352 [2024-12-05 12:03:43.353470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.353475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.353481] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:20:18.352 [2024-12-05 12:03:43.353484] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:18.352 [2024-12-05 12:03:43.353486] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:18.352 [2024-12-05 12:03:43.353491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:18.352 [2024-12-05 12:03:43.361460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:20:18.352 [2024-12-05 12:03:43.361469] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.361475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.361481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.361485] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.361490] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.361494] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.361498] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:20:18.352 [2024-12-05 12:03:43.361501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:20:18.352 [2024-12-05 12:03:43.361505] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:20:18.352 [2024-12-05 12:03:43.361518] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:20:18.352 [2024-12-05 12:03:43.369459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:20:18.352 [2024-12-05 12:03:43.369470] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:20:18.352 [2024-12-05 12:03:43.377458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:20:18.352 [2024-12-05 12:03:43.377468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:20:18.352 [2024-12-05 12:03:43.385459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:20:18.352 [2024-12-05 
12:03:43.385468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:20:18.352 [2024-12-05 12:03:43.393459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:20:18.352 [2024-12-05 12:03:43.393471] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:20:18.352 [2024-12-05 12:03:43.393474] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:20:18.352 [2024-12-05 12:03:43.393477] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:20:18.352 [2024-12-05 12:03:43.393479] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:20:18.352 [2024-12-05 12:03:43.393481] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:20:18.352 [2024-12-05 12:03:43.393486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:20:18.352 [2024-12-05 12:03:43.393491] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:20:18.352 [2024-12-05 12:03:43.393494] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:20:18.352 [2024-12-05 12:03:43.393497] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:18.352 [2024-12-05 12:03:43.393501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:20:18.352 [2024-12-05 12:03:43.393506] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:20:18.352 [2024-12-05 12:03:43.393509] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:20:18.352 [2024-12-05 12:03:43.393512] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:18.352 [2024-12-05 12:03:43.393516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:20:18.353 [2024-12-05 12:03:43.393523] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:20:18.353 [2024-12-05 12:03:43.393526] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:20:18.353 [2024-12-05 12:03:43.393529] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:20:18.353 [2024-12-05 12:03:43.393533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:20:18.613 [2024-12-05 12:03:43.401459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:20:18.613 [2024-12-05 12:03:43.401470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:20:18.613 [2024-12-05 12:03:43.401477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:20:18.613 [2024-12-05 12:03:43.401482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:20:18.613 ===================================================== 00:20:18.613 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:18.613 ===================================================== 00:20:18.613 Controller Capabilities/Features 00:20:18.613 
================================ 00:20:18.613 Vendor ID: 4e58 00:20:18.613 Subsystem Vendor ID: 4e58 00:20:18.613 Serial Number: SPDK2 00:20:18.613 Model Number: SPDK bdev Controller 00:20:18.613 Firmware Version: 25.01 00:20:18.613 Recommended Arb Burst: 6 00:20:18.613 IEEE OUI Identifier: 8d 6b 50 00:20:18.613 Multi-path I/O 00:20:18.613 May have multiple subsystem ports: Yes 00:20:18.613 May have multiple controllers: Yes 00:20:18.613 Associated with SR-IOV VF: No 00:20:18.613 Max Data Transfer Size: 131072 00:20:18.613 Max Number of Namespaces: 32 00:20:18.613 Max Number of I/O Queues: 127 00:20:18.613 NVMe Specification Version (VS): 1.3 00:20:18.613 NVMe Specification Version (Identify): 1.3 00:20:18.613 Maximum Queue Entries: 256 00:20:18.613 Contiguous Queues Required: Yes 00:20:18.613 Arbitration Mechanisms Supported 00:20:18.613 Weighted Round Robin: Not Supported 00:20:18.613 Vendor Specific: Not Supported 00:20:18.613 Reset Timeout: 15000 ms 00:20:18.613 Doorbell Stride: 4 bytes 00:20:18.613 NVM Subsystem Reset: Not Supported 00:20:18.613 Command Sets Supported 00:20:18.613 NVM Command Set: Supported 00:20:18.613 Boot Partition: Not Supported 00:20:18.613 Memory Page Size Minimum: 4096 bytes 00:20:18.613 Memory Page Size Maximum: 4096 bytes 00:20:18.613 Persistent Memory Region: Not Supported 00:20:18.613 Optional Asynchronous Events Supported 00:20:18.613 Namespace Attribute Notices: Supported 00:20:18.613 Firmware Activation Notices: Not Supported 00:20:18.613 ANA Change Notices: Not Supported 00:20:18.613 PLE Aggregate Log Change Notices: Not Supported 00:20:18.613 LBA Status Info Alert Notices: Not Supported 00:20:18.613 EGE Aggregate Log Change Notices: Not Supported 00:20:18.613 Normal NVM Subsystem Shutdown event: Not Supported 00:20:18.613 Zone Descriptor Change Notices: Not Supported 00:20:18.613 Discovery Log Change Notices: Not Supported 00:20:18.613 Controller Attributes 00:20:18.613 128-bit Host Identifier: Supported 00:20:18.613 
Non-Operational Permissive Mode: Not Supported 00:20:18.614 NVM Sets: Not Supported 00:20:18.614 Read Recovery Levels: Not Supported 00:20:18.614 Endurance Groups: Not Supported 00:20:18.614 Predictable Latency Mode: Not Supported 00:20:18.614 Traffic Based Keep ALive: Not Supported 00:20:18.614 Namespace Granularity: Not Supported 00:20:18.614 SQ Associations: Not Supported 00:20:18.614 UUID List: Not Supported 00:20:18.614 Multi-Domain Subsystem: Not Supported 00:20:18.614 Fixed Capacity Management: Not Supported 00:20:18.614 Variable Capacity Management: Not Supported 00:20:18.614 Delete Endurance Group: Not Supported 00:20:18.614 Delete NVM Set: Not Supported 00:20:18.614 Extended LBA Formats Supported: Not Supported 00:20:18.614 Flexible Data Placement Supported: Not Supported 00:20:18.614 00:20:18.614 Controller Memory Buffer Support 00:20:18.614 ================================ 00:20:18.614 Supported: No 00:20:18.614 00:20:18.614 Persistent Memory Region Support 00:20:18.614 ================================ 00:20:18.614 Supported: No 00:20:18.614 00:20:18.614 Admin Command Set Attributes 00:20:18.614 ============================ 00:20:18.614 Security Send/Receive: Not Supported 00:20:18.614 Format NVM: Not Supported 00:20:18.614 Firmware Activate/Download: Not Supported 00:20:18.614 Namespace Management: Not Supported 00:20:18.614 Device Self-Test: Not Supported 00:20:18.614 Directives: Not Supported 00:20:18.614 NVMe-MI: Not Supported 00:20:18.614 Virtualization Management: Not Supported 00:20:18.614 Doorbell Buffer Config: Not Supported 00:20:18.614 Get LBA Status Capability: Not Supported 00:20:18.614 Command & Feature Lockdown Capability: Not Supported 00:20:18.614 Abort Command Limit: 4 00:20:18.614 Async Event Request Limit: 4 00:20:18.614 Number of Firmware Slots: N/A 00:20:18.614 Firmware Slot 1 Read-Only: N/A 00:20:18.614 Firmware Activation Without Reset: N/A 00:20:18.614 Multiple Update Detection Support: N/A 00:20:18.614 Firmware Update 
Granularity: No Information Provided 00:20:18.614 Per-Namespace SMART Log: No 00:20:18.614 Asymmetric Namespace Access Log Page: Not Supported 00:20:18.614 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:20:18.614 Command Effects Log Page: Supported 00:20:18.614 Get Log Page Extended Data: Supported 00:20:18.614 Telemetry Log Pages: Not Supported 00:20:18.614 Persistent Event Log Pages: Not Supported 00:20:18.614 Supported Log Pages Log Page: May Support 00:20:18.614 Commands Supported & Effects Log Page: Not Supported 00:20:18.614 Feature Identifiers & Effects Log Page:May Support 00:20:18.614 NVMe-MI Commands & Effects Log Page: May Support 00:20:18.614 Data Area 4 for Telemetry Log: Not Supported 00:20:18.614 Error Log Page Entries Supported: 128 00:20:18.614 Keep Alive: Supported 00:20:18.614 Keep Alive Granularity: 10000 ms 00:20:18.614 00:20:18.614 NVM Command Set Attributes 00:20:18.614 ========================== 00:20:18.614 Submission Queue Entry Size 00:20:18.614 Max: 64 00:20:18.614 Min: 64 00:20:18.614 Completion Queue Entry Size 00:20:18.614 Max: 16 00:20:18.614 Min: 16 00:20:18.614 Number of Namespaces: 32 00:20:18.614 Compare Command: Supported 00:20:18.614 Write Uncorrectable Command: Not Supported 00:20:18.614 Dataset Management Command: Supported 00:20:18.614 Write Zeroes Command: Supported 00:20:18.614 Set Features Save Field: Not Supported 00:20:18.614 Reservations: Not Supported 00:20:18.614 Timestamp: Not Supported 00:20:18.614 Copy: Supported 00:20:18.614 Volatile Write Cache: Present 00:20:18.614 Atomic Write Unit (Normal): 1 00:20:18.614 Atomic Write Unit (PFail): 1 00:20:18.614 Atomic Compare & Write Unit: 1 00:20:18.614 Fused Compare & Write: Supported 00:20:18.614 Scatter-Gather List 00:20:18.614 SGL Command Set: Supported (Dword aligned) 00:20:18.614 SGL Keyed: Not Supported 00:20:18.614 SGL Bit Bucket Descriptor: Not Supported 00:20:18.614 SGL Metadata Pointer: Not Supported 00:20:18.614 Oversized SGL: Not Supported 00:20:18.614 SGL 
Metadata Address: Not Supported 00:20:18.614 SGL Offset: Not Supported 00:20:18.614 Transport SGL Data Block: Not Supported 00:20:18.614 Replay Protected Memory Block: Not Supported 00:20:18.614 00:20:18.614 Firmware Slot Information 00:20:18.614 ========================= 00:20:18.614 Active slot: 1 00:20:18.614 Slot 1 Firmware Revision: 25.01 00:20:18.614 00:20:18.614 00:20:18.614 Commands Supported and Effects 00:20:18.614 ============================== 00:20:18.614 Admin Commands 00:20:18.614 -------------- 00:20:18.614 Get Log Page (02h): Supported 00:20:18.614 Identify (06h): Supported 00:20:18.614 Abort (08h): Supported 00:20:18.614 Set Features (09h): Supported 00:20:18.614 Get Features (0Ah): Supported 00:20:18.614 Asynchronous Event Request (0Ch): Supported 00:20:18.614 Keep Alive (18h): Supported 00:20:18.614 I/O Commands 00:20:18.614 ------------ 00:20:18.614 Flush (00h): Supported LBA-Change 00:20:18.614 Write (01h): Supported LBA-Change 00:20:18.614 Read (02h): Supported 00:20:18.614 Compare (05h): Supported 00:20:18.614 Write Zeroes (08h): Supported LBA-Change 00:20:18.614 Dataset Management (09h): Supported LBA-Change 00:20:18.614 Copy (19h): Supported LBA-Change 00:20:18.614 00:20:18.614 Error Log 00:20:18.614 ========= 00:20:18.614 00:20:18.614 Arbitration 00:20:18.614 =========== 00:20:18.614 Arbitration Burst: 1 00:20:18.614 00:20:18.614 Power Management 00:20:18.614 ================ 00:20:18.614 Number of Power States: 1 00:20:18.614 Current Power State: Power State #0 00:20:18.614 Power State #0: 00:20:18.614 Max Power: 0.00 W 00:20:18.614 Non-Operational State: Operational 00:20:18.614 Entry Latency: Not Reported 00:20:18.614 Exit Latency: Not Reported 00:20:18.614 Relative Read Throughput: 0 00:20:18.614 Relative Read Latency: 0 00:20:18.614 Relative Write Throughput: 0 00:20:18.614 Relative Write Latency: 0 00:20:18.614 Idle Power: Not Reported 00:20:18.614 Active Power: Not Reported 00:20:18.614 Non-Operational Permissive Mode: Not 
Supported 00:20:18.614 00:20:18.614 Health Information 00:20:18.614 ================== 00:20:18.614 Critical Warnings: 00:20:18.614 Available Spare Space: OK 00:20:18.614 Temperature: OK 00:20:18.614 Device Reliability: OK 00:20:18.614 Read Only: No 00:20:18.614 Volatile Memory Backup: OK 00:20:18.614 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:18.614 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:18.614 Available Spare: 0% 00:20:18.614 [2024-12-05 12:03:43.401557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:20:18.614 [2024-12-05 12:03:43.409459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:20:18.614 [2024-12-05 12:03:43.409484] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:20:18.614 [2024-12-05 12:03:43.409491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-12-05 12:03:43.409496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-12-05 12:03:43.409500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-12-05 12:03:43.409504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:18.614 [2024-12-05 12:03:43.409539] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:20:18.615 [2024-12-05 12:03:43.409547] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:20:18.615 
[2024-12-05 12:03:43.410541] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:18.615 [2024-12-05 12:03:43.410577] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:20:18.615 [2024-12-05 12:03:43.410582] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:20:18.615 [2024-12-05 12:03:43.411549] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:20:18.615 [2024-12-05 12:03:43.411558] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:20:18.615 [2024-12-05 12:03:43.411598] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:20:18.615 [2024-12-05 12:03:43.412569] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:20:18.615 Available Spare Threshold: 0% 00:20:18.615 Life Percentage Used: 0% 00:20:18.615 Data Units Read: 0 00:20:18.615 Data Units Written: 0 00:20:18.615 Host Read Commands: 0 00:20:18.615 Host Write Commands: 0 00:20:18.615 Controller Busy Time: 0 minutes 00:20:18.615 Power Cycles: 0 00:20:18.615 Power On Hours: 0 hours 00:20:18.615 Unsafe Shutdowns: 0 00:20:18.615 Unrecoverable Media Errors: 0 00:20:18.615 Lifetime Error Log Entries: 0 00:20:18.615 Warning Temperature Time: 0 minutes 00:20:18.615 Critical Temperature Time: 0 minutes 00:20:18.615 00:20:18.615 Number of Queues 00:20:18.615 ================ 00:20:18.615 Number of I/O Submission Queues: 127 00:20:18.615 Number of I/O Completion Queues: 127 00:20:18.615 00:20:18.615 Active Namespaces 00:20:18.615 ================= 00:20:18.615 Namespace ID:1 00:20:18.615 Error Recovery Timeout: Unlimited 
00:20:18.615 Command Set Identifier: NVM (00h) 00:20:18.615 Deallocate: Supported 00:20:18.615 Deallocated/Unwritten Error: Not Supported 00:20:18.615 Deallocated Read Value: Unknown 00:20:18.615 Deallocate in Write Zeroes: Not Supported 00:20:18.615 Deallocated Guard Field: 0xFFFF 00:20:18.615 Flush: Supported 00:20:18.615 Reservation: Supported 00:20:18.615 Namespace Sharing Capabilities: Multiple Controllers 00:20:18.615 Size (in LBAs): 131072 (0GiB) 00:20:18.615 Capacity (in LBAs): 131072 (0GiB) 00:20:18.615 Utilization (in LBAs): 131072 (0GiB) 00:20:18.615 NGUID: B6D70931158C41739B8C900171B870AA 00:20:18.615 UUID: b6d70931-158c-4173-9b8c-900171b870aa 00:20:18.615 Thin Provisioning: Not Supported 00:20:18.615 Per-NS Atomic Units: Yes 00:20:18.615 Atomic Boundary Size (Normal): 0 00:20:18.615 Atomic Boundary Size (PFail): 0 00:20:18.615 Atomic Boundary Offset: 0 00:20:18.615 Maximum Single Source Range Length: 65535 00:20:18.615 Maximum Copy Length: 65535 00:20:18.615 Maximum Source Range Count: 1 00:20:18.615 NGUID/EUI64 Never Reused: No 00:20:18.615 Namespace Write Protected: No 00:20:18.615 Number of LBA Formats: 1 00:20:18.615 Current LBA Format: LBA Format #00 00:20:18.615 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:18.615 00:20:18.615 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:20:18.615 [2024-12-05 12:03:43.600838] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:23.923 Initializing NVMe Controllers 00:20:23.923 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:23.923 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:20:23.923 Initialization complete. Launching workers. 00:20:23.923 ======================================================== 00:20:23.923 Latency(us) 00:20:23.923 Device Information : IOPS MiB/s Average min max 00:20:23.923 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39981.79 156.18 3201.33 865.10 6782.36 00:20:23.923 ======================================================== 00:20:23.923 Total : 39981.79 156.18 3201.33 865.10 6782.36 00:20:23.923 00:20:23.923 [2024-12-05 12:03:48.706646] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:23.923 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:20:23.923 [2024-12-05 12:03:48.896205] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:29.209 Initializing NVMe Controllers 00:20:29.209 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:29.209 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:20:29.209 Initialization complete. Launching workers. 
00:20:29.209 ======================================================== 00:20:29.209 Latency(us) 00:20:29.209 Device Information : IOPS MiB/s Average min max 00:20:29.209 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39957.98 156.09 3203.23 870.25 9750.38 00:20:29.209 ======================================================== 00:20:29.209 Total : 39957.98 156.09 3203.23 870.25 9750.38 00:20:29.209 00:20:29.209 [2024-12-05 12:03:53.916173] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:29.209 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:20:29.209 [2024-12-05 12:03:54.116332] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:34.497 [2024-12-05 12:03:59.251538] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:34.497 Initializing NVMe Controllers 00:20:34.497 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:34.497 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:20:34.497 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:20:34.497 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:20:34.497 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:20:34.497 Initialization complete. Launching workers. 
00:20:34.497 Starting thread on core 2 00:20:34.497 Starting thread on core 3 00:20:34.497 Starting thread on core 1 00:20:34.497 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:20:34.497 [2024-12-05 12:03:59.499874] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:37.796 [2024-12-05 12:04:02.556977] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:37.796 Initializing NVMe Controllers 00:20:37.796 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:37.796 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:37.796 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:20:37.796 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:20:37.796 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:20:37.796 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:20:37.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:20:37.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:20:37.796 Initialization complete. Launching workers. 
00:20:37.796 Starting thread on core 1 with urgent priority queue 00:20:37.796 Starting thread on core 2 with urgent priority queue 00:20:37.796 Starting thread on core 3 with urgent priority queue 00:20:37.796 Starting thread on core 0 with urgent priority queue 00:20:37.796 SPDK bdev Controller (SPDK2 ) core 0: 17126.00 IO/s 5.84 secs/100000 ios 00:20:37.796 SPDK bdev Controller (SPDK2 ) core 1: 11973.00 IO/s 8.35 secs/100000 ios 00:20:37.796 SPDK bdev Controller (SPDK2 ) core 2: 13184.33 IO/s 7.58 secs/100000 ios 00:20:37.796 SPDK bdev Controller (SPDK2 ) core 3: 8474.67 IO/s 11.80 secs/100000 ios 00:20:37.796 ======================================================== 00:20:37.796 00:20:37.796 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:37.796 [2024-12-05 12:04:02.792824] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:37.796 Initializing NVMe Controllers 00:20:37.796 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:37.796 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:37.796 Namespace ID: 1 size: 0GB 00:20:37.796 Initialization complete. 00:20:37.796 INFO: using host memory buffer for IO 00:20:37.796 Hello world! 
00:20:37.796 [2024-12-05 12:04:02.802887] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:37.796 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:20:38.057 [2024-12-05 12:04:03.041120] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:39.441 Initializing NVMe Controllers 00:20:39.441 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:39.441 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:39.442 Initialization complete. Launching workers. 00:20:39.442 submit (in ns) avg, min, max = 5333.7, 2823.3, 3998056.7 00:20:39.442 complete (in ns) avg, min, max = 16926.5, 1632.5, 4019906.7 00:20:39.442 00:20:39.442 Submit histogram 00:20:39.442 ================ 00:20:39.442 Range in us Cumulative Count 00:20:39.442 2.813 - 2.827: 0.0098% ( 2) 00:20:39.442 2.827 - 2.840: 0.5850% ( 117) 00:20:39.442 2.840 - 2.853: 2.4137% ( 372) 00:20:39.442 2.853 - 2.867: 6.0466% ( 739) 00:20:39.442 2.867 - 2.880: 11.2722% ( 1063) 00:20:39.442 2.880 - 2.893: 17.5204% ( 1271) 00:20:39.442 2.893 - 2.907: 22.9918% ( 1113) 00:20:39.442 2.907 - 2.920: 29.2597% ( 1275) 00:20:39.442 2.920 - 2.933: 35.0162% ( 1171) 00:20:39.442 2.933 - 2.947: 39.5733% ( 927) 00:20:39.442 2.947 - 2.960: 44.6760% ( 1038) 00:20:39.442 2.960 - 2.973: 50.6489% ( 1215) 00:20:39.442 2.973 - 2.987: 57.1084% ( 1314) 00:20:39.442 2.987 - 3.000: 64.5266% ( 1509) 00:20:39.442 3.000 - 3.013: 73.0213% ( 1728) 00:20:39.442 3.013 - 3.027: 80.2527% ( 1471) 00:20:39.442 3.027 - 3.040: 86.4615% ( 1263) 00:20:39.442 3.040 - 3.053: 92.0362% ( 1134) 00:20:39.442 3.053 - 3.067: 95.8215% ( 770) 00:20:39.442 3.067 - 3.080: 97.8468% ( 412) 00:20:39.442 3.080 - 3.093: 
98.8743% ( 209) 00:20:39.442 3.093 - 3.107: 99.2872% ( 84) 00:20:39.442 3.107 - 3.120: 99.4396% ( 31) 00:20:39.442 3.120 - 3.133: 99.4937% ( 11) 00:20:39.442 3.133 - 3.147: 99.5182% ( 5) 00:20:39.442 3.147 - 3.160: 99.5330% ( 3) 00:20:39.442 3.160 - 3.173: 99.5379% ( 1) 00:20:39.442 3.200 - 3.213: 99.5428% ( 1) 00:20:39.442 3.253 - 3.267: 99.5526% ( 2) 00:20:39.442 3.413 - 3.440: 99.5625% ( 2) 00:20:39.442 3.520 - 3.547: 99.5674% ( 1) 00:20:39.442 4.027 - 4.053: 99.5723% ( 1) 00:20:39.442 4.053 - 4.080: 99.5772% ( 1) 00:20:39.442 4.107 - 4.133: 99.5821% ( 1) 00:20:39.442 4.133 - 4.160: 99.5871% ( 1) 00:20:39.442 4.160 - 4.187: 99.5920% ( 1) 00:20:39.442 4.240 - 4.267: 99.5969% ( 1) 00:20:39.442 4.347 - 4.373: 99.6018% ( 1) 00:20:39.442 4.400 - 4.427: 99.6116% ( 2) 00:20:39.442 4.427 - 4.453: 99.6166% ( 1) 00:20:39.442 4.480 - 4.507: 99.6215% ( 1) 00:20:39.442 4.533 - 4.560: 99.6264% ( 1) 00:20:39.442 4.640 - 4.667: 99.6362% ( 2) 00:20:39.442 4.667 - 4.693: 99.6461% ( 2) 00:20:39.442 4.880 - 4.907: 99.6510% ( 1) 00:20:39.442 4.907 - 4.933: 99.6559% ( 1) 00:20:39.442 4.987 - 5.013: 99.6657% ( 2) 00:20:39.442 5.013 - 5.040: 99.6755% ( 2) 00:20:39.442 5.040 - 5.067: 99.6805% ( 1) 00:20:39.442 5.067 - 5.093: 99.6854% ( 1) 00:20:39.442 5.093 - 5.120: 99.6952% ( 2) 00:20:39.442 5.173 - 5.200: 99.7050% ( 2) 00:20:39.442 5.200 - 5.227: 99.7100% ( 1) 00:20:39.442 5.253 - 5.280: 99.7149% ( 1) 00:20:39.442 5.307 - 5.333: 99.7198% ( 1) 00:20:39.442 5.387 - 5.413: 99.7345% ( 3) 00:20:39.442 5.440 - 5.467: 99.7444% ( 2) 00:20:39.442 5.493 - 5.520: 99.7493% ( 1) 00:20:39.442 5.573 - 5.600: 99.7542% ( 1) 00:20:39.442 5.627 - 5.653: 99.7591% ( 1) 00:20:39.442 5.653 - 5.680: 99.7690% ( 2) 00:20:39.442 5.680 - 5.707: 99.7739% ( 1) 00:20:39.442 5.760 - 5.787: 99.7837% ( 2) 00:20:39.442 5.813 - 5.840: 99.7886% ( 1) 00:20:39.442 5.840 - 5.867: 99.7935% ( 1) 00:20:39.442 5.867 - 5.893: 99.7984% ( 1) 00:20:39.442 5.893 - 5.920: 99.8083% ( 2) 00:20:39.442 5.920 - 5.947: 99.8132% ( 1) 
00:20:39.442 5.973 - 6.000: 99.8181% ( 1) 00:20:39.442 6.027 - 6.053: 99.8230% ( 1) 00:20:39.442 6.053 - 6.080: 99.8378% ( 3) 00:20:39.442 6.080 - 6.107: 99.8476% ( 2) 00:20:39.442 6.107 - 6.133: 99.8574% ( 2) 00:20:39.442 6.133 - 6.160: 99.8624% ( 1) 00:20:39.442 6.213 - 6.240: 99.8673% ( 1) 00:20:39.442 6.240 - 6.267: 99.8771% ( 2) 00:20:39.442 6.267 - 6.293: 99.8820% ( 1) 00:20:39.442 6.320 - 6.347: 99.8869% ( 1) 00:20:39.442 6.533 - 6.560: 99.8918% ( 1) 00:20:39.442 [2024-12-05 12:04:04.133962] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:39.442 6.587 - 6.613: 99.8968% ( 1) 00:20:39.442 6.613 - 6.640: 99.9017% ( 1) 00:20:39.442 6.640 - 6.667: 99.9066% ( 1) 00:20:39.442 6.667 - 6.693: 99.9115% ( 1) 00:20:39.442 6.747 - 6.773: 99.9164% ( 1) 00:20:39.442 7.040 - 7.093: 99.9213% ( 1) 00:20:39.442 7.947 - 8.000: 99.9263% ( 1) 00:20:39.442 12.320 - 12.373: 99.9312% ( 1) 00:20:39.442 13.280 - 13.333: 99.9361% ( 1) 00:20:39.442 50.773 - 50.987: 99.9410% ( 1) 00:20:39.442 3986.773 - 4014.080: 100.0000% ( 12) 00:20:39.442 00:20:39.442 Complete histogram 00:20:39.442 ================== 00:20:39.442 Range in us Cumulative Count 00:20:39.442 1.627 - 1.633: 0.0049% ( 1) 00:20:39.442 1.633 - 1.640: 0.0098% ( 1) 00:20:39.442 1.640 - 1.647: 0.3195% ( 63) 00:20:39.442 1.647 - 1.653: 1.0864% ( 156) 00:20:39.442 1.653 - 1.660: 1.1602% ( 15) 00:20:39.442 1.660 - 1.667: 1.2929% ( 27) 00:20:39.442 1.667 - 1.673: 1.4010% ( 22) 00:20:39.442 1.673 - 1.680: 1.4305% ( 6) 00:20:39.442 1.680 - 1.687: 35.5471% ( 6940) 00:20:39.442 1.687 - 1.693: 56.4595% ( 4254) 00:20:39.442 1.693 - 1.700: 61.4738% ( 1020) 00:20:39.442 1.700 - 1.707: 74.5109% ( 2652) 00:20:39.442 1.707 - 1.720: 81.8405% ( 1491) 00:20:39.442 1.720 - 1.733: 83.8659% ( 412) 00:20:39.442 1.733 - 1.747: 86.2157% ( 478) 00:20:39.442 1.747 - 1.760: 90.8564% ( 944) 00:20:39.442 1.760 - 1.773: 96.0869% ( 1064) 00:20:39.442 1.773 - 1.787: 98.4613% ( 483) 00:20:39.442 
1.787 - 1.800: 99.2675% ( 164) 00:20:39.442 1.800 - 1.813: 99.4052% ( 28) 00:20:39.442 1.813 - 1.827: 99.4347% ( 6) 00:20:39.442 1.827 - 1.840: 99.4396% ( 1) 00:20:39.442 1.880 - 1.893: 99.4445% ( 1) 00:20:39.442 3.253 - 3.267: 99.4494% ( 1) 00:20:39.442 3.413 - 3.440: 99.4543% ( 1) 00:20:39.442 3.440 - 3.467: 99.4592% ( 1) 00:20:39.442 3.493 - 3.520: 99.4642% ( 1) 00:20:39.442 3.573 - 3.600: 99.4691% ( 1) 00:20:39.442 3.600 - 3.627: 99.4740% ( 1) 00:20:39.442 3.680 - 3.707: 99.4838% ( 2) 00:20:39.442 3.813 - 3.840: 99.4887% ( 1) 00:20:39.442 3.867 - 3.893: 99.4937% ( 1) 00:20:39.442 3.947 - 3.973: 99.4986% ( 1) 00:20:39.442 4.053 - 4.080: 99.5035% ( 1) 00:20:39.442 4.107 - 4.133: 99.5084% ( 1) 00:20:39.442 4.240 - 4.267: 99.5133% ( 1) 00:20:39.442 4.267 - 4.293: 99.5182% ( 1) 00:20:39.442 4.480 - 4.507: 99.5232% ( 1) 00:20:39.442 4.507 - 4.533: 99.5330% ( 2) 00:20:39.442 4.560 - 4.587: 99.5379% ( 1) 00:20:39.442 4.587 - 4.613: 99.5477% ( 2) 00:20:39.442 4.693 - 4.720: 99.5526% ( 1) 00:20:39.442 4.720 - 4.747: 99.5576% ( 1) 00:20:39.442 4.747 - 4.773: 99.5625% ( 1) 00:20:39.442 4.800 - 4.827: 99.5674% ( 1) 00:20:39.442 4.880 - 4.907: 99.5723% ( 1) 00:20:39.442 4.960 - 4.987: 99.5772% ( 1) 00:20:39.442 5.013 - 5.040: 99.5821% ( 1) 00:20:39.442 5.093 - 5.120: 99.5871% ( 1) 00:20:39.442 5.227 - 5.253: 99.5920% ( 1) 00:20:39.442 5.333 - 5.360: 99.5969% ( 1) 00:20:39.442 8.480 - 8.533: 99.6018% ( 1) 00:20:39.442 15.147 - 15.253: 99.6067% ( 1) 00:20:39.442 34.133 - 34.347: 99.6116% ( 1) 00:20:39.442 69.973 - 70.400: 99.6166% ( 1) 00:20:39.442 1884.160 - 1897.813: 99.6215% ( 1) 00:20:39.442 3986.773 - 4014.080: 99.9951% ( 76) 00:20:39.442 4014.080 - 4041.387: 100.0000% ( 1) 00:20:39.442 00:20:39.442 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:20:39.443 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local 
traddr=/var/run/vfio-user/domain/vfio-user2/2 00:20:39.443 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:20:39.443 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:20:39.443 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:39.443 [ 00:20:39.443 { 00:20:39.443 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:39.443 "subtype": "Discovery", 00:20:39.443 "listen_addresses": [], 00:20:39.443 "allow_any_host": true, 00:20:39.443 "hosts": [] 00:20:39.443 }, 00:20:39.443 { 00:20:39.443 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:39.443 "subtype": "NVMe", 00:20:39.443 "listen_addresses": [ 00:20:39.443 { 00:20:39.443 "trtype": "VFIOUSER", 00:20:39.443 "adrfam": "IPv4", 00:20:39.443 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:39.443 "trsvcid": "0" 00:20:39.443 } 00:20:39.443 ], 00:20:39.443 "allow_any_host": true, 00:20:39.443 "hosts": [], 00:20:39.443 "serial_number": "SPDK1", 00:20:39.443 "model_number": "SPDK bdev Controller", 00:20:39.443 "max_namespaces": 32, 00:20:39.443 "min_cntlid": 1, 00:20:39.443 "max_cntlid": 65519, 00:20:39.443 "namespaces": [ 00:20:39.443 { 00:20:39.443 "nsid": 1, 00:20:39.443 "bdev_name": "Malloc1", 00:20:39.443 "name": "Malloc1", 00:20:39.443 "nguid": "08178C740C3642CBABD65D67C4071DFF", 00:20:39.443 "uuid": "08178c74-0c36-42cb-abd6-5d67c4071dff" 00:20:39.443 }, 00:20:39.443 { 00:20:39.443 "nsid": 2, 00:20:39.443 "bdev_name": "Malloc3", 00:20:39.443 "name": "Malloc3", 00:20:39.443 "nguid": "09881FAB0FE04B219AD368A74180A886", 00:20:39.443 "uuid": "09881fab-0fe0-4b21-9ad3-68a74180a886" 00:20:39.443 } 00:20:39.443 ] 00:20:39.443 }, 00:20:39.443 { 00:20:39.443 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:39.443 "subtype": "NVMe", 00:20:39.443 
"listen_addresses": [ 00:20:39.443 { 00:20:39.443 "trtype": "VFIOUSER", 00:20:39.443 "adrfam": "IPv4", 00:20:39.443 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:39.443 "trsvcid": "0" 00:20:39.443 } 00:20:39.443 ], 00:20:39.443 "allow_any_host": true, 00:20:39.443 "hosts": [], 00:20:39.443 "serial_number": "SPDK2", 00:20:39.443 "model_number": "SPDK bdev Controller", 00:20:39.443 "max_namespaces": 32, 00:20:39.443 "min_cntlid": 1, 00:20:39.443 "max_cntlid": 65519, 00:20:39.443 "namespaces": [ 00:20:39.443 { 00:20:39.443 "nsid": 1, 00:20:39.443 "bdev_name": "Malloc2", 00:20:39.443 "name": "Malloc2", 00:20:39.443 "nguid": "B6D70931158C41739B8C900171B870AA", 00:20:39.443 "uuid": "b6d70931-158c-4173-9b8c-900171b870aa" 00:20:39.443 } 00:20:39.443 ] 00:20:39.443 } 00:20:39.443 ] 00:20:39.443 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:39.443 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:20:39.443 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1323124 00:20:39.443 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:20:39.443 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:20:39.443 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:39.443 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:39.443 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:20:39.443 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:20:39.443 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:20:39.704 [2024-12-05 12:04:04.511779] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:20:39.704 Malloc4 00:20:39.704 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:20:39.704 [2024-12-05 12:04:04.707134] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:20:39.704 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:20:39.704 Asynchronous Event Request test 00:20:39.704 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:20:39.704 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:20:39.704 Registering asynchronous event callbacks... 00:20:39.704 Starting namespace attribute notice tests for all controllers... 00:20:39.704 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:39.704 aer_cb - Changed Namespace 00:20:39.704 Cleaning up... 
00:20:39.964 [ 00:20:39.964 { 00:20:39.964 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:39.964 "subtype": "Discovery", 00:20:39.964 "listen_addresses": [], 00:20:39.964 "allow_any_host": true, 00:20:39.964 "hosts": [] 00:20:39.964 }, 00:20:39.964 { 00:20:39.964 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:20:39.964 "subtype": "NVMe", 00:20:39.964 "listen_addresses": [ 00:20:39.964 { 00:20:39.964 "trtype": "VFIOUSER", 00:20:39.964 "adrfam": "IPv4", 00:20:39.964 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:20:39.964 "trsvcid": "0" 00:20:39.964 } 00:20:39.964 ], 00:20:39.964 "allow_any_host": true, 00:20:39.964 "hosts": [], 00:20:39.964 "serial_number": "SPDK1", 00:20:39.964 "model_number": "SPDK bdev Controller", 00:20:39.964 "max_namespaces": 32, 00:20:39.964 "min_cntlid": 1, 00:20:39.964 "max_cntlid": 65519, 00:20:39.964 "namespaces": [ 00:20:39.964 { 00:20:39.964 "nsid": 1, 00:20:39.964 "bdev_name": "Malloc1", 00:20:39.964 "name": "Malloc1", 00:20:39.964 "nguid": "08178C740C3642CBABD65D67C4071DFF", 00:20:39.964 "uuid": "08178c74-0c36-42cb-abd6-5d67c4071dff" 00:20:39.964 }, 00:20:39.964 { 00:20:39.964 "nsid": 2, 00:20:39.964 "bdev_name": "Malloc3", 00:20:39.964 "name": "Malloc3", 00:20:39.964 "nguid": "09881FAB0FE04B219AD368A74180A886", 00:20:39.964 "uuid": "09881fab-0fe0-4b21-9ad3-68a74180a886" 00:20:39.964 } 00:20:39.964 ] 00:20:39.964 }, 00:20:39.964 { 00:20:39.964 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:20:39.964 "subtype": "NVMe", 00:20:39.964 "listen_addresses": [ 00:20:39.964 { 00:20:39.964 "trtype": "VFIOUSER", 00:20:39.964 "adrfam": "IPv4", 00:20:39.964 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:20:39.964 "trsvcid": "0" 00:20:39.964 } 00:20:39.964 ], 00:20:39.964 "allow_any_host": true, 00:20:39.964 "hosts": [], 00:20:39.964 "serial_number": "SPDK2", 00:20:39.964 "model_number": "SPDK bdev Controller", 00:20:39.964 "max_namespaces": 32, 00:20:39.964 "min_cntlid": 1, 00:20:39.964 "max_cntlid": 65519, 00:20:39.964 "namespaces": [ 
00:20:39.964 { 00:20:39.964 "nsid": 1, 00:20:39.964 "bdev_name": "Malloc2", 00:20:39.964 "name": "Malloc2", 00:20:39.964 "nguid": "B6D70931158C41739B8C900171B870AA", 00:20:39.964 "uuid": "b6d70931-158c-4173-9b8c-900171b870aa" 00:20:39.964 }, 00:20:39.964 { 00:20:39.964 "nsid": 2, 00:20:39.964 "bdev_name": "Malloc4", 00:20:39.964 "name": "Malloc4", 00:20:39.964 "nguid": "06E567D703174A77A6F428CC6CBB92D4", 00:20:39.964 "uuid": "06e567d7-0317-4a77-a6f4-28cc6cbb92d4" 00:20:39.964 } 00:20:39.964 ] 00:20:39.964 } 00:20:39.964 ] 00:20:39.964 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1323124 00:20:39.964 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:20:39.964 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1314044 00:20:39.964 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1314044 ']' 00:20:39.964 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1314044 00:20:39.964 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:20:39.964 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:39.964 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1314044 00:20:39.964 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:39.964 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:39.964 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1314044' 00:20:39.964 killing process with pid 1314044 00:20:39.964 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 1314044 00:20:39.964 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1314044 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1323161 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1323161' 00:20:40.224 Process pid: 1323161 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1323161 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1323161 ']' 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.224 
12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.224 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:40.224 [2024-12-05 12:04:05.190514] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:20:40.224 [2024-12-05 12:04:05.191442] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:20:40.224 [2024-12-05 12:04:05.191493] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.484 [2024-12-05 12:04:05.277910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.484 [2024-12-05 12:04:05.306786] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.484 [2024-12-05 12:04:05.306819] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.484 [2024-12-05 12:04:05.306825] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.484 [2024-12-05 12:04:05.306830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.484 [2024-12-05 12:04:05.306834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:40.484 [2024-12-05 12:04:05.308004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.484 [2024-12-05 12:04:05.308158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.484 [2024-12-05 12:04:05.308305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.484 [2024-12-05 12:04:05.308307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.484 [2024-12-05 12:04:05.360049] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:20:40.484 [2024-12-05 12:04:05.361060] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:20:40.484 [2024-12-05 12:04:05.361903] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:20:40.484 [2024-12-05 12:04:05.362514] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:20:40.484 [2024-12-05 12:04:05.362555] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:20:41.054 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.054 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:20:41.054 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:20:41.995 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:20:42.255 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:20:42.255 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:20:42.255 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:42.255 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:20:42.255 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:42.516 Malloc1 00:20:42.516 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:20:42.777 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:20:42.777 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:20:43.037 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:20:43.037 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:20:43.037 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:20:43.299 Malloc2 00:20:43.299 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:20:43.299 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:20:43.561 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:20:43.822 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:20:43.822 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1323161 00:20:43.822 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1323161 ']' 00:20:43.822 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1323161 00:20:43.822 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:20:43.822 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.822 12:04:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1323161 00:20:43.822 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.822 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.822 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1323161' 00:20:43.822 killing process with pid 1323161 00:20:43.822 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1323161 00:20:43.822 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1323161 00:20:44.083 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:20:44.083 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:20:44.083 00:20:44.083 real 0m50.879s 00:20:44.083 user 3m15.093s 00:20:44.083 sys 0m2.568s 00:20:44.083 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:44.083 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:20:44.083 ************************************ 00:20:44.083 END TEST nvmf_vfio_user 00:20:44.083 ************************************ 00:20:44.083 12:04:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:44.083 12:04:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:44.083 12:04:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:44.083 12:04:08 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:20:44.084 ************************************ 00:20:44.084 START TEST nvmf_vfio_user_nvme_compliance 00:20:44.084 ************************************ 00:20:44.084 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:20:44.084 * Looking for test storage... 00:20:44.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:20:44.084 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:44.084 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:20:44.084 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:44.344 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:44.344 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:44.344 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:44.344 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:44.344 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:20:44.344 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:20:44.344 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:20:44.344 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:20:44.344 12:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:20:44.344 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:20:44.344 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:20:44.344 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:44.344 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:20:44.344 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:44.345 12:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:44.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.345 --rc genhtml_branch_coverage=1 00:20:44.345 --rc genhtml_function_coverage=1 00:20:44.345 --rc genhtml_legend=1 00:20:44.345 --rc geninfo_all_blocks=1 00:20:44.345 --rc geninfo_unexecuted_blocks=1 00:20:44.345 00:20:44.345 ' 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:44.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.345 --rc genhtml_branch_coverage=1 00:20:44.345 --rc genhtml_function_coverage=1 00:20:44.345 --rc genhtml_legend=1 00:20:44.345 --rc geninfo_all_blocks=1 00:20:44.345 --rc geninfo_unexecuted_blocks=1 00:20:44.345 00:20:44.345 ' 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:44.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.345 --rc genhtml_branch_coverage=1 00:20:44.345 --rc genhtml_function_coverage=1 00:20:44.345 --rc 
genhtml_legend=1 00:20:44.345 --rc geninfo_all_blocks=1 00:20:44.345 --rc geninfo_unexecuted_blocks=1 00:20:44.345 00:20:44.345 ' 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:44.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:44.345 --rc genhtml_branch_coverage=1 00:20:44.345 --rc genhtml_function_coverage=1 00:20:44.345 --rc genhtml_legend=1 00:20:44.345 --rc geninfo_all_blocks=1 00:20:44.345 --rc geninfo_unexecuted_blocks=1 00:20:44.345 00:20:44.345 ' 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.345 12:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@50 -- # : 0 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@27 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:44.345 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1324214 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1324214' 00:20:44.345 Process pid: 1324214 00:20:44.345 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:44.346 12:04:09 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:20:44.346 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1324214 00:20:44.346 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1324214 ']' 00:20:44.346 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:44.346 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.346 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:44.346 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:44.346 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.346 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:44.346 [2024-12-05 12:04:09.266838] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:20:44.346 [2024-12-05 12:04:09.266913] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.346 [2024-12-05 12:04:09.352826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:44.346 [2024-12-05 12:04:09.387089] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:44.346 [2024-12-05 12:04:09.387122] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.346 [2024-12-05 12:04:09.387128] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.346 [2024-12-05 12:04:09.387133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.346 [2024-12-05 12:04:09.387137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.346 [2024-12-05 12:04:09.388340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:44.346 [2024-12-05 12:04:09.388506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:44.346 [2024-12-05 12:04:09.388511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:45.353 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:45.353 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:20:45.353 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:46.295 12:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:46.295 malloc0 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.295 12:04:11 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.295 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:20:46.295 00:20:46.295 00:20:46.295 CUnit - A unit testing framework for C - Version 2.1-3 00:20:46.295 http://cunit.sourceforge.net/ 00:20:46.295 00:20:46.295 00:20:46.295 Suite: nvme_compliance 00:20:46.295 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-05 12:04:11.322853] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:46.296 [2024-12-05 12:04:11.324156] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:20:46.296 [2024-12-05 12:04:11.324168] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:20:46.296 [2024-12-05 12:04:11.324173] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:20:46.296 [2024-12-05 12:04:11.326879] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:46.556 passed 00:20:46.556 Test: admin_identify_ctrlr_verify_fused ...[2024-12-05 12:04:11.403382] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:46.556 [2024-12-05 12:04:11.406397] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:46.556 passed 00:20:46.556 Test: admin_identify_ns ...[2024-12-05 12:04:11.483804] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:46.556 [2024-12-05 12:04:11.543464] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:20:46.556 [2024-12-05 12:04:11.551467] ctrlr.c:2752:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:20:46.556 [2024-12-05 12:04:11.572545] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:46.556 passed 00:20:46.818 Test: admin_get_features_mandatory_features ...[2024-12-05 12:04:11.650583] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:46.818 [2024-12-05 12:04:11.653607] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:46.818 passed 00:20:46.818 Test: admin_get_features_optional_features ...[2024-12-05 12:04:11.732075] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:46.818 [2024-12-05 12:04:11.735091] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:46.818 passed 00:20:46.818 Test: admin_set_features_number_of_queues ...[2024-12-05 12:04:11.809810] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:47.079 [2024-12-05 12:04:11.918550] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:47.079 passed 00:20:47.079 Test: admin_get_log_page_mandatory_logs ...[2024-12-05 12:04:11.993812] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:47.079 [2024-12-05 12:04:11.996830] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:47.079 passed 00:20:47.079 Test: admin_get_log_page_with_lpo ...[2024-12-05 12:04:12.075766] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:47.340 [2024-12-05 12:04:12.143465] ctrlr.c:2699:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:20:47.340 [2024-12-05 12:04:12.156507] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:47.340 passed 00:20:47.340 Test: fabric_property_get ...[2024-12-05 12:04:12.232743] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:47.340 [2024-12-05 12:04:12.233950] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:20:47.340 [2024-12-05 12:04:12.235766] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:47.340 passed 00:20:47.340 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-05 12:04:12.312243] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:47.340 [2024-12-05 12:04:12.313442] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:20:47.340 [2024-12-05 12:04:12.315262] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:47.340 passed 00:20:47.601 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-05 12:04:12.392030] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:47.601 [2024-12-05 12:04:12.476464] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:47.601 [2024-12-05 12:04:12.492461] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:47.601 [2024-12-05 12:04:12.497534] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:47.601 passed 00:20:47.601 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-05 12:04:12.571761] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:47.601 [2024-12-05 
12:04:12.572956] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:20:47.601 [2024-12-05 12:04:12.574781] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:47.601 passed 00:20:47.861 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-05 12:04:12.651528] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:47.861 [2024-12-05 12:04:12.729462] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:47.861 [2024-12-05 12:04:12.753460] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:20:47.861 [2024-12-05 12:04:12.758523] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:47.861 passed 00:20:47.861 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-05 12:04:12.832726] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:47.861 [2024-12-05 12:04:12.833936] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:20:47.861 [2024-12-05 12:04:12.833954] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:20:47.861 [2024-12-05 12:04:12.835749] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:47.861 passed 00:20:47.861 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-05 12:04:12.910482] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:48.121 [2024-12-05 12:04:13.003458] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:20:48.121 [2024-12-05 12:04:13.011458] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:20:48.122 [2024-12-05 12:04:13.019460] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:20:48.122 [2024-12-05 
12:04:13.027459] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:20:48.122 [2024-12-05 12:04:13.056527] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:48.122 passed 00:20:48.122 Test: admin_create_io_sq_verify_pc ...[2024-12-05 12:04:13.129716] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:48.122 [2024-12-05 12:04:13.146466] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:20:48.122 [2024-12-05 12:04:13.163887] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:48.382 passed 00:20:48.382 Test: admin_create_io_qp_max_qps ...[2024-12-05 12:04:13.242376] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:49.324 [2024-12-05 12:04:14.353466] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:20:49.895 [2024-12-05 12:04:14.745872] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:49.895 passed 00:20:49.895 Test: admin_create_io_sq_shared_cq ...[2024-12-05 12:04:14.825755] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:20:50.155 [2024-12-05 12:04:14.957462] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:20:50.155 [2024-12-05 12:04:14.994505] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:20:50.155 passed 00:20:50.155 00:20:50.155 Run Summary: Type Total Ran Passed Failed Inactive 00:20:50.155 suites 1 1 n/a 0 0 00:20:50.155 tests 18 18 18 0 0 00:20:50.155 asserts 360 360 360 0 n/a 00:20:50.155 00:20:50.155 Elapsed time = 1.514 seconds 00:20:50.155 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1324214 00:20:50.155 12:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1324214 ']' 00:20:50.155 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1324214 00:20:50.155 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:20:50.155 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:50.155 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1324214 00:20:50.155 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:50.155 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:50.155 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1324214' 00:20:50.155 killing process with pid 1324214 00:20:50.155 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1324214 00:20:50.155 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1324214 00:20:50.415 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:20:50.415 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:20:50.415 00:20:50.415 real 0m6.246s 00:20:50.415 user 0m17.735s 00:20:50.415 sys 0m0.517s 00:20:50.415 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.415 12:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:20:50.415 ************************************ 00:20:50.415 END TEST nvmf_vfio_user_nvme_compliance 00:20:50.415 ************************************ 00:20:50.415 12:04:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:50.415 12:04:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:50.415 12:04:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:50.415 12:04:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:50.415 ************************************ 00:20:50.415 START TEST nvmf_vfio_user_fuzz 00:20:50.415 ************************************ 00:20:50.415 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:20:50.415 * Looking for test storage... 
00:20:50.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:50.415 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:50.415 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:20:50.415 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:20:50.677 12:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:50.677 12:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:50.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.677 --rc genhtml_branch_coverage=1 00:20:50.677 --rc genhtml_function_coverage=1 00:20:50.677 --rc genhtml_legend=1 00:20:50.677 --rc geninfo_all_blocks=1 00:20:50.677 --rc geninfo_unexecuted_blocks=1 00:20:50.677 00:20:50.677 ' 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:50.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.677 --rc genhtml_branch_coverage=1 00:20:50.677 --rc genhtml_function_coverage=1 00:20:50.677 --rc genhtml_legend=1 00:20:50.677 --rc geninfo_all_blocks=1 00:20:50.677 --rc geninfo_unexecuted_blocks=1 00:20:50.677 00:20:50.677 ' 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:50.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.677 --rc genhtml_branch_coverage=1 00:20:50.677 --rc genhtml_function_coverage=1 00:20:50.677 --rc genhtml_legend=1 00:20:50.677 --rc geninfo_all_blocks=1 00:20:50.677 --rc geninfo_unexecuted_blocks=1 00:20:50.677 00:20:50.677 ' 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:50.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:50.677 --rc genhtml_branch_coverage=1 00:20:50.677 --rc genhtml_function_coverage=1 00:20:50.677 --rc genhtml_legend=1 00:20:50.677 --rc geninfo_all_blocks=1 00:20:50.677 --rc geninfo_unexecuted_blocks=1 00:20:50.677 00:20:50.677 ' 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 
00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:50.677 12:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:50.677 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:50.678 12:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@50 -- # : 0 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:50.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1325343 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1325343' 00:20:50.678 Process pid: 1325343 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1325343 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1325343 ']' 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:50.678 12:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:51.619 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.619 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:20:51.619 12:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:52.561 malloc0 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:20:52.561 12:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:21:24.673 Fuzzing completed. 
Shutting down the fuzz application 00:21:24.673 00:21:24.673 Dumping successful admin opcodes: 00:21:24.673 9, 10, 00:21:24.673 Dumping successful io opcodes: 00:21:24.673 0, 00:21:24.673 NS: 0x20000081ef00 I/O qp, Total commands completed: 1398310, total successful commands: 5485, random_seed: 2986315712 00:21:24.673 NS: 0x20000081ef00 admin qp, Total commands completed: 346432, total successful commands: 93, random_seed: 4145368448 00:21:24.673 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:21:24.673 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.673 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:24.673 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.673 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1325343 00:21:24.673 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1325343 ']' 00:21:24.673 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1325343 00:21:24.673 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:21:24.673 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:24.673 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1325343 00:21:24.673 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:24.673 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:24.673 12:04:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1325343' 00:21:24.673 killing process with pid 1325343 00:21:24.673 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1325343 00:21:24.673 12:04:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1325343 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:21:24.673 00:21:24.673 real 0m32.892s 00:21:24.673 user 0m37.931s 00:21:24.673 sys 0m23.810s 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:21:24.673 ************************************ 00:21:24.673 END TEST nvmf_vfio_user_fuzz 00:21:24.673 ************************************ 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:24.673 ************************************ 00:21:24.673 START TEST nvmf_auth_target 00:21:24.673 ************************************ 00:21:24.673 12:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:21:24.673 * Looking for test storage... 00:21:24.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:21:24.673 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:21:24.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.674 --rc genhtml_branch_coverage=1 00:21:24.674 --rc genhtml_function_coverage=1 00:21:24.674 --rc genhtml_legend=1 00:21:24.674 --rc geninfo_all_blocks=1 00:21:24.674 --rc geninfo_unexecuted_blocks=1 00:21:24.674 00:21:24.674 ' 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:21:24.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.674 --rc genhtml_branch_coverage=1 00:21:24.674 --rc genhtml_function_coverage=1 00:21:24.674 --rc genhtml_legend=1 00:21:24.674 --rc geninfo_all_blocks=1 00:21:24.674 --rc geninfo_unexecuted_blocks=1 00:21:24.674 00:21:24.674 ' 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:21:24.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.674 --rc genhtml_branch_coverage=1 00:21:24.674 --rc genhtml_function_coverage=1 00:21:24.674 --rc genhtml_legend=1 00:21:24.674 --rc geninfo_all_blocks=1 00:21:24.674 --rc geninfo_unexecuted_blocks=1 00:21:24.674 00:21:24.674 ' 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:21:24.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.674 --rc genhtml_branch_coverage=1 00:21:24.674 --rc genhtml_function_coverage=1 00:21:24.674 --rc genhtml_legend=1 00:21:24.674 --rc geninfo_all_blocks=1 00:21:24.674 --rc geninfo_unexecuted_blocks=1 00:21:24.674 00:21:24.674 ' 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:24.674 12:04:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:24.674 
12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@50 -- # : 0 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:24.674 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 
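The `[: : integer expression expected` message logged above comes from `'[' '' -eq 1 ']'` at nvmf/common.sh line 31: `-eq` needs integer operands on both sides, and the left-hand variable expanded to an empty string. A minimal sketch of the failure and the usual defensive default (the `FLAG` variable name is hypothetical, not taken from common.sh):

```shell
# Reproduce the logged failure: -eq with an empty operand is an error in test(1).
FLAG=""                      # unset/empty, as in the trace
[ "$FLAG" -eq 1 ] 2>/dev/null && echo enabled || echo disabled
# test(1) writes "integer expression expected" to stderr and returns non-zero,
# so the || branch runs and "disabled" is printed.

# Defensive form: default the operand to 0 so the comparison is always numeric.
[ "${FLAG:-0}" -eq 1 ] && echo enabled || echo disabled   # prints "disabled"
```

The script tolerates the error because the non-zero exit status still takes the intended branch, which is why the run continues past it.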
00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # remove_target_ns 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # xtrace_disable 00:21:24.674 12:04:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.280 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # pci_devs=() 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:31.281 
12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # net_devs=() 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # e810=() 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # local -ga e810 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # x722=() 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # local -ga x722 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # mlx=() 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # local -ga mlx 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:31.281 12:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:31.281 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:31.281 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
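The scan above buckets each NIC by its PCI vendor:device pair before deciding which family (e810, x722, mlx) it belongs to, which is how both `0x8086 - 0x159b` ports end up in the e810 list. A simplified sketch of that bucketing, reconstructed from the ids visible under the nvmf/common.sh@141-160 markers in the trace (the function name is an assumption; the real logic populates `pci_bus_cache`-backed arrays):

```shell
# Map a "vendor:device" pair to the NIC family the nvmf test scripts select on,
# using the ids seen in the trace (Intel 0x8086, Mellanox 0x15b3).
classify_pci() {
  case "$1" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;       # Intel E810 (ice driver)
    0x8086:0x37d2)               echo x722 ;;       # Intel X722
    0x15b3:*)                    echo mlx ;;        # Mellanox ConnectX family
    *)                           echo unsupported ;;
  esac
}

classify_pci 0x8086:0x159b   # e810 -- matches "Found 0000:4b:00.0 (0x8086 - 0x159b)"
classify_pci 0x15b3:0x1017   # mlx
```

With `[[ e810 == e810 ]]` true, the run then narrows `pci_devs` to the two E810 ports and resolves their net devices (cvl_0_0, cvl_0_1) as logged below.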
00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:31.281 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:31.281 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:31.281 12:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # is_hw=yes 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@247 -- # create_target_ns 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:31.281 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 
00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@28 -- # local -g _dev 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:21:31.282 12:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772161 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:21:31.282 10.0.0.1 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772162 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:21:31.282 10.0.0.2 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # ipts -I 
INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:31.282 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:21:31.283 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:31.283 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:31.283 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:31.283 12:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 
]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:31.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:31.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.650 ms 00:21:31.283 00:21:31.283 --- 10.0.0.1 ping statistics --- 00:21:31.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.283 rtt min/avg/max/mdev = 0.650/0.650/0.650/0.000 ms 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:31.283 
12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:21:31.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:31.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:21:31.283 00:21:31.283 --- 10.0.0.2 ping statistics --- 00:21:31.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:31.283 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # return 0 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@322 -- # 
NVMF_TARGET_INTERFACE2= 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:21:31.283 12:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # return 1 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev= 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@160 -- # return 0 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:21:31.283 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:31.284 12:04:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target1 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # return 1 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev= 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@160 -- # return 0 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:21:31.284 ' 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:31.284 12:04:56 
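The address resolution traced above never queries `ip addr`: setup.sh maps a logical name (`initiator0`, `target0`) to a physical device (`cvl_0_0`, `cvl_0_1`) and then reads the IP that the test harness previously stashed in that device's `ifalias` sysfs attribute. A minimal sketch of that lookup follows; the `sysfs_root` parameter is an addition here (not in the script) so the sketch can run without real NICs:

```python
import pathlib

def get_ip_address(dev: str, sysfs_root: str = "/sys/class/net") -> str:
    """Sketch of setup.sh's get_ip_address: the harness stores each test
    interface's address in its ifalias, so resolving an IP is a single
    sysfs read. Returns '' when no alias is set, mirroring the script's
    empty NVMF_SECOND_INITIATOR_IP/NVMF_SECOND_TARGET_IP results above."""
    path = pathlib.Path(sysfs_root, dev, "ifalias")
    try:
        return path.read_text().strip()
    except OSError:
        return ""
```

Note the target-side variant in the log additionally wraps the `cat` in `ip netns exec nvmf_ns_spdk`, because the target device lives in a dedicated network namespace; this sketch does not model the namespace hop.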
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=1335646 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 1335646 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1335646 ']' 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.284 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1335681 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@528 -- # digest=null 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=b0861ebd302805e451f44ef8cf7fa12488c09ebd9c95f35e 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.rsA 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key b0861ebd302805e451f44ef8cf7fa12488c09ebd9c95f35e 0 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 b0861ebd302805e451f44ef8cf7fa12488c09ebd9c95f35e 0 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=b0861ebd302805e451f44ef8cf7fa12488c09ebd9c95f35e 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=0 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.rsA 00:21:32.228 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.rsA 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.rsA 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=64c7fcc5e6b2c332edec64232f7ad90ba2b0ac62d35e9e8d2c0b41f0e04a385d 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.AUS 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 64c7fcc5e6b2c332edec64232f7ad90ba2b0ac62d35e9e8d2c0b41f0e04a385d 3 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 64c7fcc5e6b2c332edec64232f7ad90ba2b0ac62d35e9e8d2c0b41f0e04a385d 3 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=64c7fcc5e6b2c332edec64232f7ad90ba2b0ac62d35e9e8d2c0b41f0e04a385d 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@506 -- # digest=3 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.AUS 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.AUS 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.AUS 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=546e46f25b9a0227b6ebc2704d278972 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.dkV 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 546e46f25b9a0227b6ebc2704d278972 1 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 
546e46f25b9a0227b6ebc2704d278972 1 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=546e46f25b9a0227b6ebc2704d278972 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:21:32.229 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.dkV 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.dkV 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.dkV 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=107cbf28ec23e25b3b4077c17ba63902fbf975b0d0bee85d 00:21:32.491 12:04:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.gIz 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 107cbf28ec23e25b3b4077c17ba63902fbf975b0d0bee85d 2 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 107cbf28ec23e25b3b4077c17ba63902fbf975b0d0bee85d 2 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=107cbf28ec23e25b3b4077c17ba63902fbf975b0d0bee85d 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.gIz 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.gIz 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.gIz 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A 
digests 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=d85110195f4544ef5cbf1d2abf2582d4612b4f0e754e3171 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.eog 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key d85110195f4544ef5cbf1d2abf2582d4612b4f0e754e3171 2 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 d85110195f4544ef5cbf1d2abf2582d4612b4f0e754e3171 2 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=d85110195f4544ef5cbf1d2abf2582d4612b4f0e754e3171 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.eog 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.eog 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.eog 00:21:32.491 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=1ad2d43c34a95a1e215b19a20e9a1189 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.pVe 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 1ad2d43c34a95a1e215b19a20e9a1189 1 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 1ad2d43c34a95a1e215b19a20e9a1189 1 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=1ad2d43c34a95a1e215b19a20e9a1189 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 
00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.pVe 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.pVe 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.pVe 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=4222b763bc1c54ca640c42f164bddd829e83f2683b8a805c234b21c5b2d265a9 00:21:32.492 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.Or6 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 4222b763bc1c54ca640c42f164bddd829e83f2683b8a805c234b21c5b2d265a9 3 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # 
format_key DHHC-1 4222b763bc1c54ca640c42f164bddd829e83f2683b8a805c234b21c5b2d265a9 3 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=4222b763bc1c54ca640c42f164bddd829e83f2683b8a805c234b21c5b2d265a9 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.Or6 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.Or6 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.Or6 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1335646 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1335646 ']' 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:32.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
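At this point all four DHCHAP keys (and three controller keys) have been generated by the `gen_dhchap_key` / `format_key DHHC-1` flow traced above: `xxd -p -c0 -l <len/2> /dev/urandom` produces the raw hex, and an inline `python -` stub wraps it into the secret representation. The log elides the stub's body, so the following is a hedged reconstruction of what that wrapping step is assumed to do (append the little-endian CRC32 of the raw key, base64-encode, and add the `DHHC-1:<digest>:` envelope used for NVMe in-band DH-HMAC-CHAP secrets):

```python
import base64
import zlib

def format_dhchap_key(key_hex: str, digest_id: int) -> str:
    """Assumed sketch of the format_key DHHC-1 step in nvmf/common.sh.

    The digest_id matches the digests map in the trace:
    null=0, sha256=1, sha384=2, sha512=3.
    """
    key = bytes.fromhex(key_hex)                     # raw bytes from xxd
    crc = zlib.crc32(key).to_bytes(4, "little")      # integrity trailer
    b64 = base64.b64encode(key + crc).decode()
    return f"DHHC-1:{digest_id:02x}:{b64}:"
```

For example, `format_dhchap_key("b0861e…", 0)` would correspond to the `gen_dhchap_key null 48` call above (48 hex characters, i.e. 24 random bytes, digest id 0 for null) before the result is written to `/tmp/spdk.key-null.rsA` and `chmod 0600`'d.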
00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:32.753 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1335681 /var/tmp/host.sock 00:21:33.013 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1335681 ']' 00:21:33.013 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:21:33.013 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.013 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:21:33.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:21:33.013 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.013 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.013 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:33.013 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:33.013 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:21:33.013 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.013 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.013 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.013 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:33.013 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rsA 00:21:33.014 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.014 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.014 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.014 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.rsA 00:21:33.014 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.rsA 00:21:33.274 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.AUS ]] 00:21:33.274 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AUS 00:21:33.274 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.274 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.274 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.274 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AUS 00:21:33.274 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AUS 00:21:33.535 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:33.535 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dkV 00:21:33.535 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.535 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.535 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.535 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.dkV 00:21:33.535 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.dkV 00:21:33.535 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.gIz ]] 00:21:33.535 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gIz 00:21:33.535 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.535 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.796 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.796 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gIz 00:21:33.796 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gIz 00:21:33.796 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:33.796 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.eog 00:21:33.796 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.796 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.796 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.796 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.eog 00:21:33.796 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.eog 00:21:34.056 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.pVe ]] 00:21:34.056 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.pVe 00:21:34.056 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.056 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.056 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.056 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.pVe 00:21:34.056 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.pVe 00:21:34.316 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:21:34.316 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Or6 00:21:34.316 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.316 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.316 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.316 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Or6 00:21:34.316 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Or6 00:21:34.316 12:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:21:34.316 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:34.316 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:34.316 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.316 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:34.316 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:34.576 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:21:34.576 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.576 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:34.576 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:34.576 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:34.576 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.576 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.576 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.576 12:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.576 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.576 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.576 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.576 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:34.835 00:21:34.835 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.835 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.835 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.094 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.094 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.094 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.094 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:35.094 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.094 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.094 { 00:21:35.094 "cntlid": 1, 00:21:35.094 "qid": 0, 00:21:35.094 "state": "enabled", 00:21:35.094 "thread": "nvmf_tgt_poll_group_000", 00:21:35.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:35.094 "listen_address": { 00:21:35.094 "trtype": "TCP", 00:21:35.094 "adrfam": "IPv4", 00:21:35.094 "traddr": "10.0.0.2", 00:21:35.094 "trsvcid": "4420" 00:21:35.094 }, 00:21:35.094 "peer_address": { 00:21:35.094 "trtype": "TCP", 00:21:35.094 "adrfam": "IPv4", 00:21:35.094 "traddr": "10.0.0.1", 00:21:35.094 "trsvcid": "40958" 00:21:35.094 }, 00:21:35.094 "auth": { 00:21:35.094 "state": "completed", 00:21:35.094 "digest": "sha256", 00:21:35.094 "dhgroup": "null" 00:21:35.094 } 00:21:35.094 } 00:21:35.094 ]' 00:21:35.094 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.094 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:35.094 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.094 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:35.095 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.354 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.354 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.354 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.354 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:21:35.354 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:21:35.924 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.924 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:36.185 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.185 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.185 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.185 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.185 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:21:36.185 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:36.185 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:21:36.185 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.185 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:36.185 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:36.185 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:36.185 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.185 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.185 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.185 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.185 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.185 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.185 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.185 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:36.446 00:21:36.446 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.446 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.446 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.706 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.706 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.706 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.706 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.706 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.706 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.706 { 00:21:36.706 "cntlid": 3, 00:21:36.706 "qid": 0, 00:21:36.706 "state": "enabled", 00:21:36.706 "thread": "nvmf_tgt_poll_group_000", 00:21:36.706 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:36.706 "listen_address": { 00:21:36.706 "trtype": "TCP", 00:21:36.706 "adrfam": "IPv4", 00:21:36.706 
"traddr": "10.0.0.2", 00:21:36.706 "trsvcid": "4420" 00:21:36.706 }, 00:21:36.706 "peer_address": { 00:21:36.706 "trtype": "TCP", 00:21:36.706 "adrfam": "IPv4", 00:21:36.706 "traddr": "10.0.0.1", 00:21:36.706 "trsvcid": "40986" 00:21:36.706 }, 00:21:36.706 "auth": { 00:21:36.706 "state": "completed", 00:21:36.706 "digest": "sha256", 00:21:36.706 "dhgroup": "null" 00:21:36.706 } 00:21:36.706 } 00:21:36.706 ]' 00:21:36.706 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.706 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:36.706 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.706 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:36.706 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.706 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.706 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.706 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.965 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:21:36.965 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:21:37.532 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.532 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:37.532 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.532 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.532 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.532 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.532 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:37.532 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:37.792 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:21:37.792 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.792 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:37.792 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:21:37.792 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:37.792 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.792 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.792 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.792 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.792 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.792 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.792 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.792 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:38.052 00:21:38.052 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.052 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.052 
12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.312 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.312 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.312 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.312 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.312 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.312 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.312 { 00:21:38.312 "cntlid": 5, 00:21:38.312 "qid": 0, 00:21:38.312 "state": "enabled", 00:21:38.312 "thread": "nvmf_tgt_poll_group_000", 00:21:38.312 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:38.312 "listen_address": { 00:21:38.312 "trtype": "TCP", 00:21:38.312 "adrfam": "IPv4", 00:21:38.312 "traddr": "10.0.0.2", 00:21:38.312 "trsvcid": "4420" 00:21:38.312 }, 00:21:38.312 "peer_address": { 00:21:38.312 "trtype": "TCP", 00:21:38.312 "adrfam": "IPv4", 00:21:38.312 "traddr": "10.0.0.1", 00:21:38.312 "trsvcid": "41006" 00:21:38.312 }, 00:21:38.312 "auth": { 00:21:38.312 "state": "completed", 00:21:38.312 "digest": "sha256", 00:21:38.312 "dhgroup": "null" 00:21:38.312 } 00:21:38.312 } 00:21:38.312 ]' 00:21:38.312 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.312 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:38.312 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:21:38.312 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:38.312 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.312 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.312 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.312 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.572 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:21:38.572 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:21:39.141 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.141 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:39.141 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.141 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.141 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.141 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.141 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:39.141 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:21:39.400 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:21:39.400 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.400 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:39.400 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:39.400 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:39.400 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.400 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:39.400 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.400 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
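Editor's annotation (not part of the captured log): the loop traced above repeats the same few bash idioms for every key index. A minimal, self-contained sketch of those idioms follows; the paths, the controller name, and the secret are hypothetical placeholders, not the real test values, and the DHHC-1 field meanings are an assumption based on the NVMe DH-HMAC-CHAP secret representation rather than something stated in this log.

```shell
#!/usr/bin/env bash
# Sketch of the bash idioms used by target/auth.sh in the log above.
# All values here are placeholders for illustration only.

# 1. Conditional controller-key arguments: ${ckeys[i]:+...} expands to zero
#    words when ckeys[i] is empty, so --dhchap-ctrlr-key is omitted entirely
#    (which is why key3 above is added without any ckey3).
ckeys=("/tmp/ck0.example" "")          # hypothetical key file paths
args0=(${ckeys[0]:+--dhchap-ctrlr-key "ckey0"})   # -> 2 words
args1=(${ckeys[1]:+--dhchap-ctrlr-key "ckey1"})   # -> 0 words

# 2. Literal string comparison: backslash-escaping every character on the
#    right-hand side of [[ == ]] disables glob interpretation, as in the
#    log's [[ nvme0 == \n\v\m\e\0 ]] controller-name checks.
ctrl="nvme0"
literal_ok=0
[[ $ctrl == \n\v\m\e\0 ]] && literal_ok=1

# 3. DHHC-1 secret layout: "DHHC-1:<hash id>:<base64 key material>:",
#    where 00 appears to mean no hash transform and 01/02/03 select
#    SHA-256/384/512 (assumption; matches the 00/01/02/03 secrets above).
secret='DHHC-1:03:QUJDREVGR0g=:'       # truncated placeholder secret
IFS=: read -r magic hashid b64 _ <<<"$secret"
```

The first idiom is the reason the RPC invocations in the log sometimes carry a `--dhchap-ctrlr-key ckeyN` pair and sometimes do not, without any explicit branching at the call site.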
00:21:39.400 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.400 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:39.400 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.400 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:39.660 00:21:39.660 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.660 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.660 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.660 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.660 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.660 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.660 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.660 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.660 
12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.660 { 00:21:39.660 "cntlid": 7, 00:21:39.660 "qid": 0, 00:21:39.660 "state": "enabled", 00:21:39.660 "thread": "nvmf_tgt_poll_group_000", 00:21:39.660 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:39.660 "listen_address": { 00:21:39.660 "trtype": "TCP", 00:21:39.660 "adrfam": "IPv4", 00:21:39.660 "traddr": "10.0.0.2", 00:21:39.660 "trsvcid": "4420" 00:21:39.660 }, 00:21:39.660 "peer_address": { 00:21:39.660 "trtype": "TCP", 00:21:39.660 "adrfam": "IPv4", 00:21:39.660 "traddr": "10.0.0.1", 00:21:39.660 "trsvcid": "41044" 00:21:39.660 }, 00:21:39.660 "auth": { 00:21:39.660 "state": "completed", 00:21:39.660 "digest": "sha256", 00:21:39.660 "dhgroup": "null" 00:21:39.660 } 00:21:39.660 } 00:21:39.660 ]' 00:21:39.660 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.920 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:39.920 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.920 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:39.920 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.920 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.920 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.920 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.182 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:21:40.182 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.754 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.013 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.013 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.013 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.013 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:41.013 00:21:41.013 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.013 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.013 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.273 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.273 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.273 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.273 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.273 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.273 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.273 { 00:21:41.273 "cntlid": 9, 00:21:41.273 "qid": 0, 00:21:41.273 "state": "enabled", 00:21:41.273 "thread": "nvmf_tgt_poll_group_000", 00:21:41.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:41.273 "listen_address": { 00:21:41.273 "trtype": "TCP", 00:21:41.273 "adrfam": "IPv4", 00:21:41.273 "traddr": "10.0.0.2", 00:21:41.273 "trsvcid": "4420" 00:21:41.273 }, 00:21:41.273 "peer_address": { 00:21:41.273 "trtype": "TCP", 00:21:41.273 "adrfam": "IPv4", 00:21:41.273 "traddr": "10.0.0.1", 00:21:41.273 "trsvcid": "41070" 00:21:41.273 
}, 00:21:41.273 "auth": { 00:21:41.273 "state": "completed", 00:21:41.273 "digest": "sha256", 00:21:41.273 "dhgroup": "ffdhe2048" 00:21:41.273 } 00:21:41.273 } 00:21:41.273 ]' 00:21:41.273 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.273 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:41.273 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.532 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:41.532 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.532 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.532 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.532 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.532 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:21:41.532 12:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret 
DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:21:42.121 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.121 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:42.121 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.121 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.121 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.121 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.122 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.122 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:42.381 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:21:42.381 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.381 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:42.381 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:42.381 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:42.381 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.381 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.381 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.381 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.381 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.381 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.381 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.381 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:42.642 00:21:42.642 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.642 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.642 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.903 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.903 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.903 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.903 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.903 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.903 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.903 { 00:21:42.903 "cntlid": 11, 00:21:42.903 "qid": 0, 00:21:42.903 "state": "enabled", 00:21:42.903 "thread": "nvmf_tgt_poll_group_000", 00:21:42.903 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:42.903 "listen_address": { 00:21:42.903 "trtype": "TCP", 00:21:42.903 "adrfam": "IPv4", 00:21:42.903 "traddr": "10.0.0.2", 00:21:42.903 "trsvcid": "4420" 00:21:42.903 }, 00:21:42.903 "peer_address": { 00:21:42.903 "trtype": "TCP", 00:21:42.903 "adrfam": "IPv4", 00:21:42.903 "traddr": "10.0.0.1", 00:21:42.903 "trsvcid": "55816" 00:21:42.903 }, 00:21:42.903 "auth": { 00:21:42.903 "state": "completed", 00:21:42.903 "digest": "sha256", 00:21:42.903 "dhgroup": "ffdhe2048" 00:21:42.903 } 00:21:42.903 } 00:21:42.903 ]' 00:21:42.903 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.903 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:42.903 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.903 12:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:42.903 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.903 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.903 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.903 12:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.162 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:21:43.162 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:21:43.730 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.730 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:43.730 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:43.730 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.730 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.730 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.730 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:43.730 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:43.991 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:21:43.991 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:43.991 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:43.991 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:43.991 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:43.991 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:43.991 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.991 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.991 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:43.991 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.991 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.991 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.991 12:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:44.252 00:21:44.252 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.252 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.252 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.513 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.513 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.513 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.513 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.513 12:05:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.513 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.513 { 00:21:44.513 "cntlid": 13, 00:21:44.513 "qid": 0, 00:21:44.513 "state": "enabled", 00:21:44.513 "thread": "nvmf_tgt_poll_group_000", 00:21:44.513 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:44.513 "listen_address": { 00:21:44.513 "trtype": "TCP", 00:21:44.513 "adrfam": "IPv4", 00:21:44.513 "traddr": "10.0.0.2", 00:21:44.513 "trsvcid": "4420" 00:21:44.513 }, 00:21:44.513 "peer_address": { 00:21:44.513 "trtype": "TCP", 00:21:44.513 "adrfam": "IPv4", 00:21:44.513 "traddr": "10.0.0.1", 00:21:44.513 "trsvcid": "55850" 00:21:44.513 }, 00:21:44.513 "auth": { 00:21:44.513 "state": "completed", 00:21:44.513 "digest": "sha256", 00:21:44.513 "dhgroup": "ffdhe2048" 00:21:44.513 } 00:21:44.513 } 00:21:44.513 ]' 00:21:44.513 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.513 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:44.513 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.513 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:44.513 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.513 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.513 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.513 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.775 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:21:44.775 12:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:21:45.344 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.344 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:45.344 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.344 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.344 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.344 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.344 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:45.344 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:21:45.603 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:21:45.603 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.603 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:45.603 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:45.603 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:45.603 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.603 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:45.603 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.603 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.603 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.603 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:45.603 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.603 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.863 00:21:45.863 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.863 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.863 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.124 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.124 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.124 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.124 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.124 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.124 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.124 { 00:21:46.124 "cntlid": 15, 00:21:46.124 "qid": 0, 00:21:46.124 "state": "enabled", 00:21:46.124 "thread": "nvmf_tgt_poll_group_000", 00:21:46.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:46.124 "listen_address": { 00:21:46.124 "trtype": "TCP", 00:21:46.124 "adrfam": "IPv4", 00:21:46.124 "traddr": "10.0.0.2", 00:21:46.124 "trsvcid": "4420" 00:21:46.124 }, 00:21:46.124 "peer_address": { 00:21:46.124 "trtype": "TCP", 00:21:46.124 "adrfam": "IPv4", 00:21:46.124 "traddr": "10.0.0.1", 
00:21:46.124 "trsvcid": "55874" 00:21:46.124 }, 00:21:46.124 "auth": { 00:21:46.124 "state": "completed", 00:21:46.124 "digest": "sha256", 00:21:46.124 "dhgroup": "ffdhe2048" 00:21:46.124 } 00:21:46.124 } 00:21:46.124 ]' 00:21:46.124 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.124 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:46.124 12:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.124 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:46.124 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.124 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.124 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.124 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.384 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:21:46.384 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:21:46.953 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.953 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:46.953 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.953 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.953 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.953 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:46.953 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.953 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:46.953 12:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:47.212 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:21:47.212 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.212 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:47.212 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:47.212 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:47.212 12:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.212 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.212 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.212 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.212 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.212 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.212 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.212 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.472 00:21:47.472 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.472 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.472 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.472 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.472 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.472 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.472 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.472 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.472 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.472 { 00:21:47.472 "cntlid": 17, 00:21:47.472 "qid": 0, 00:21:47.472 "state": "enabled", 00:21:47.472 "thread": "nvmf_tgt_poll_group_000", 00:21:47.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:47.472 "listen_address": { 00:21:47.472 "trtype": "TCP", 00:21:47.472 "adrfam": "IPv4", 00:21:47.472 "traddr": "10.0.0.2", 00:21:47.472 "trsvcid": "4420" 00:21:47.472 }, 00:21:47.472 "peer_address": { 00:21:47.472 "trtype": "TCP", 00:21:47.472 "adrfam": "IPv4", 00:21:47.472 "traddr": "10.0.0.1", 00:21:47.472 "trsvcid": "55902" 00:21:47.472 }, 00:21:47.472 "auth": { 00:21:47.472 "state": "completed", 00:21:47.472 "digest": "sha256", 00:21:47.472 "dhgroup": "ffdhe3072" 00:21:47.472 } 00:21:47.472 } 00:21:47.472 ]' 00:21:47.472 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.731 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:47.731 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.731 12:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:47.731 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.731 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.731 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.731 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.992 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:21:47.992 12:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:21:48.562 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.562 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:48.562 12:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.562 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.562 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.562 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.562 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:48.562 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:48.822 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:21:48.822 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.822 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:48.822 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:48.822 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:48.822 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.822 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.822 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.822 12:05:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.822 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.822 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.822 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.822 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:48.822 00:21:49.094 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.094 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.094 12:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.094 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.094 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.094 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.094 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:49.094 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.094 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.094 { 00:21:49.094 "cntlid": 19, 00:21:49.094 "qid": 0, 00:21:49.094 "state": "enabled", 00:21:49.094 "thread": "nvmf_tgt_poll_group_000", 00:21:49.094 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:49.094 "listen_address": { 00:21:49.094 "trtype": "TCP", 00:21:49.094 "adrfam": "IPv4", 00:21:49.094 "traddr": "10.0.0.2", 00:21:49.094 "trsvcid": "4420" 00:21:49.094 }, 00:21:49.094 "peer_address": { 00:21:49.094 "trtype": "TCP", 00:21:49.094 "adrfam": "IPv4", 00:21:49.094 "traddr": "10.0.0.1", 00:21:49.094 "trsvcid": "55936" 00:21:49.094 }, 00:21:49.094 "auth": { 00:21:49.094 "state": "completed", 00:21:49.094 "digest": "sha256", 00:21:49.094 "dhgroup": "ffdhe3072" 00:21:49.094 } 00:21:49.094 } 00:21:49.094 ]' 00:21:49.094 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.094 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:49.094 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.357 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:49.357 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.357 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.357 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.357 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.357 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:21:49.357 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:21:49.925 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.185 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:50.185 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.185 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.185 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.185 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.186 12:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:50.186 12:05:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:50.186 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:21:50.186 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.186 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:50.186 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:50.186 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:50.186 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.186 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.186 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.186 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.186 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.186 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.186 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.186 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:50.445 00:21:50.445 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.445 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.445 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.705 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.705 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.705 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.705 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.705 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.705 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.705 { 00:21:50.705 "cntlid": 21, 00:21:50.705 "qid": 0, 00:21:50.705 "state": "enabled", 00:21:50.705 "thread": "nvmf_tgt_poll_group_000", 00:21:50.705 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:50.705 "listen_address": { 00:21:50.705 "trtype": "TCP", 00:21:50.705 "adrfam": "IPv4", 00:21:50.705 "traddr": "10.0.0.2", 00:21:50.705 
"trsvcid": "4420" 00:21:50.705 }, 00:21:50.705 "peer_address": { 00:21:50.705 "trtype": "TCP", 00:21:50.705 "adrfam": "IPv4", 00:21:50.705 "traddr": "10.0.0.1", 00:21:50.705 "trsvcid": "55976" 00:21:50.705 }, 00:21:50.705 "auth": { 00:21:50.705 "state": "completed", 00:21:50.705 "digest": "sha256", 00:21:50.705 "dhgroup": "ffdhe3072" 00:21:50.705 } 00:21:50.705 } 00:21:50.705 ]' 00:21:50.705 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.705 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:50.705 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.705 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:50.705 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.965 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.965 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.965 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.965 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:21:50.965 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:21:51.536 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.536 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.536 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:51.536 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.536 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:51.796 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:51.797 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:52.056 00:21:52.056 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.056 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.056 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.315 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.315 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.315 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.315 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.315 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.315 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.315 { 00:21:52.315 "cntlid": 23, 00:21:52.315 "qid": 0, 00:21:52.315 "state": "enabled", 00:21:52.315 "thread": "nvmf_tgt_poll_group_000", 00:21:52.315 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:52.315 "listen_address": { 00:21:52.315 "trtype": "TCP", 00:21:52.315 "adrfam": "IPv4", 00:21:52.315 "traddr": "10.0.0.2", 00:21:52.315 "trsvcid": "4420" 00:21:52.315 }, 00:21:52.315 "peer_address": { 00:21:52.315 "trtype": "TCP", 00:21:52.315 "adrfam": "IPv4", 00:21:52.315 "traddr": "10.0.0.1", 00:21:52.315 "trsvcid": "41072" 00:21:52.315 }, 00:21:52.315 "auth": { 00:21:52.315 "state": "completed", 00:21:52.315 "digest": "sha256", 00:21:52.315 "dhgroup": "ffdhe3072" 00:21:52.315 } 00:21:52.315 } 00:21:52.315 ]' 00:21:52.315 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.315 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:52.315 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.315 12:05:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:52.315 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.315 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.315 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.315 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.574 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:21:52.574 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:21:53.145 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.145 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.145 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:53.145 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.145 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:53.145 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.145 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:53.145 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.145 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:53.145 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:53.404 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:21:53.404 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.404 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:53.404 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:53.404 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:53.404 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.404 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.404 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.404 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:53.404 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.404 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.404 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.404 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:53.664 00:21:53.664 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.664 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.664 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.929 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.929 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.929 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.929 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.929 12:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.929 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.929 { 00:21:53.929 "cntlid": 25, 00:21:53.929 "qid": 0, 00:21:53.929 "state": "enabled", 00:21:53.929 "thread": "nvmf_tgt_poll_group_000", 00:21:53.929 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:53.929 "listen_address": { 00:21:53.929 "trtype": "TCP", 00:21:53.929 "adrfam": "IPv4", 00:21:53.929 "traddr": "10.0.0.2", 00:21:53.929 "trsvcid": "4420" 00:21:53.929 }, 00:21:53.929 "peer_address": { 00:21:53.929 "trtype": "TCP", 00:21:53.929 "adrfam": "IPv4", 00:21:53.929 "traddr": "10.0.0.1", 00:21:53.929 "trsvcid": "41088" 00:21:53.929 }, 00:21:53.929 "auth": { 00:21:53.929 "state": "completed", 00:21:53.929 "digest": "sha256", 00:21:53.929 "dhgroup": "ffdhe4096" 00:21:53.929 } 00:21:53.929 } 00:21:53.929 ]' 00:21:53.929 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.929 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:53.929 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.929 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:53.929 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.929 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:53.929 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.929 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.209 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:21:54.209 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:21:54.831 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.831 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:54.831 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.831 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.831 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.831 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.831 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:54.831 12:05:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:55.091 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:21:55.091 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.091 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:55.091 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:55.091 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:55.091 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.091 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.091 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.091 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.091 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.091 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.091 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.091 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:55.351 00:21:55.351 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.351 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.351 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.351 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.351 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.351 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.351 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.351 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.351 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.351 { 00:21:55.351 "cntlid": 27, 00:21:55.351 "qid": 0, 00:21:55.351 "state": "enabled", 00:21:55.351 "thread": "nvmf_tgt_poll_group_000", 00:21:55.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:55.351 "listen_address": { 00:21:55.351 "trtype": "TCP", 00:21:55.351 "adrfam": "IPv4", 00:21:55.351 "traddr": "10.0.0.2", 00:21:55.351 
"trsvcid": "4420" 00:21:55.351 }, 00:21:55.351 "peer_address": { 00:21:55.351 "trtype": "TCP", 00:21:55.351 "adrfam": "IPv4", 00:21:55.351 "traddr": "10.0.0.1", 00:21:55.351 "trsvcid": "41116" 00:21:55.351 }, 00:21:55.351 "auth": { 00:21:55.351 "state": "completed", 00:21:55.351 "digest": "sha256", 00:21:55.351 "dhgroup": "ffdhe4096" 00:21:55.351 } 00:21:55.351 } 00:21:55.351 ]' 00:21:55.351 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.610 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:55.610 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.610 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:55.610 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.610 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.610 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.610 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.870 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:21:55.870 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:21:56.441 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.441 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:56.441 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.441 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.441 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.441 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.441 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:56.441 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:56.702 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:21:56.702 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.702 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:56.702 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:56.702 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:56.702 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.702 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.702 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.702 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.702 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.702 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.702 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.702 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.962 00:21:56.962 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.962 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:56.962 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.962 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.962 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.962 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.962 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.962 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.962 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.962 { 00:21:56.962 "cntlid": 29, 00:21:56.962 "qid": 0, 00:21:56.962 "state": "enabled", 00:21:56.962 "thread": "nvmf_tgt_poll_group_000", 00:21:56.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:56.962 "listen_address": { 00:21:56.962 "trtype": "TCP", 00:21:56.962 "adrfam": "IPv4", 00:21:56.962 "traddr": "10.0.0.2", 00:21:56.962 "trsvcid": "4420" 00:21:56.962 }, 00:21:56.962 "peer_address": { 00:21:56.962 "trtype": "TCP", 00:21:56.962 "adrfam": "IPv4", 00:21:56.962 "traddr": "10.0.0.1", 00:21:56.962 "trsvcid": "41134" 00:21:56.962 }, 00:21:56.962 "auth": { 00:21:56.963 "state": "completed", 00:21:56.963 "digest": "sha256", 00:21:56.963 "dhgroup": "ffdhe4096" 00:21:56.963 } 00:21:56.963 } 00:21:56.963 ]' 00:21:56.963 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.222 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:57.222 12:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.222 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:57.222 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.222 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.222 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.222 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.481 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:21:57.481 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:21:58.052 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.052 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:58.052 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.052 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.052 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.052 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.052 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:58.052 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:21:58.052 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:21:58.052 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.052 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:58.052 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:58.052 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:58.052 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.052 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:21:58.052 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.052 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.052 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.052 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:58.052 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.052 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.313 00:21:58.313 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.313 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:58.313 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.574 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.574 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.574 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.574 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:58.574 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.574 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.574 { 00:21:58.574 "cntlid": 31, 00:21:58.574 "qid": 0, 00:21:58.574 "state": "enabled", 00:21:58.574 "thread": "nvmf_tgt_poll_group_000", 00:21:58.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:21:58.574 "listen_address": { 00:21:58.574 "trtype": "TCP", 00:21:58.574 "adrfam": "IPv4", 00:21:58.574 "traddr": "10.0.0.2", 00:21:58.574 "trsvcid": "4420" 00:21:58.574 }, 00:21:58.574 "peer_address": { 00:21:58.574 "trtype": "TCP", 00:21:58.574 "adrfam": "IPv4", 00:21:58.574 "traddr": "10.0.0.1", 00:21:58.574 "trsvcid": "41168" 00:21:58.574 }, 00:21:58.574 "auth": { 00:21:58.574 "state": "completed", 00:21:58.574 "digest": "sha256", 00:21:58.574 "dhgroup": "ffdhe4096" 00:21:58.574 } 00:21:58.574 } 00:21:58.574 ]' 00:21:58.574 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.574 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:58.574 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.574 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:58.574 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.834 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.834 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.834 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.834 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:21:58.834 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:21:59.403 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.403 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:21:59.403 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.403 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.403 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.403 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:59.403 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.403 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:59.403 12:05:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:21:59.662 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:21:59.662 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.662 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:59.662 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:59.662 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:59.662 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.662 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.662 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.662 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.662 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.662 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.662 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.662 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:59.923 00:21:59.923 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.183 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.183 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.183 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.183 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.183 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.183 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.183 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.183 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.183 { 00:22:00.183 "cntlid": 33, 00:22:00.183 "qid": 0, 00:22:00.183 "state": "enabled", 00:22:00.183 "thread": "nvmf_tgt_poll_group_000", 00:22:00.183 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:00.183 "listen_address": { 00:22:00.183 "trtype": "TCP", 00:22:00.183 "adrfam": "IPv4", 00:22:00.183 "traddr": "10.0.0.2", 00:22:00.183 
"trsvcid": "4420" 00:22:00.183 }, 00:22:00.183 "peer_address": { 00:22:00.183 "trtype": "TCP", 00:22:00.183 "adrfam": "IPv4", 00:22:00.183 "traddr": "10.0.0.1", 00:22:00.183 "trsvcid": "41206" 00:22:00.183 }, 00:22:00.183 "auth": { 00:22:00.183 "state": "completed", 00:22:00.183 "digest": "sha256", 00:22:00.183 "dhgroup": "ffdhe6144" 00:22:00.183 } 00:22:00.183 } 00:22:00.183 ]' 00:22:00.183 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.183 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:00.183 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.443 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:00.443 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.443 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.443 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.443 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:00.443 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:00.443 12:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:01.011 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.270 12:05:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.270 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:01.529 00:22:01.790 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:01.790 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:01.790 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:01.790 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:01.790 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:01.790 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.790 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.790 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.790 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:01.790 { 00:22:01.790 "cntlid": 35, 00:22:01.790 "qid": 0, 00:22:01.790 "state": "enabled", 00:22:01.790 "thread": "nvmf_tgt_poll_group_000", 00:22:01.790 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:01.790 "listen_address": { 00:22:01.790 "trtype": "TCP", 00:22:01.790 "adrfam": "IPv4", 00:22:01.790 "traddr": "10.0.0.2", 00:22:01.790 "trsvcid": "4420" 00:22:01.790 }, 00:22:01.790 "peer_address": { 00:22:01.790 "trtype": "TCP", 00:22:01.790 "adrfam": "IPv4", 00:22:01.790 "traddr": "10.0.0.1", 00:22:01.790 "trsvcid": "41226" 00:22:01.790 }, 00:22:01.790 "auth": { 00:22:01.790 "state": "completed", 00:22:01.790 "digest": "sha256", 00:22:01.790 "dhgroup": "ffdhe6144" 00:22:01.790 } 00:22:01.790 } 00:22:01.790 ]' 00:22:01.790 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:01.790 12:05:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:01.790 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.051 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:02.051 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.051 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.051 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.051 12:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.311 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:02.311 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.882 12:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.452 00:22:03.452 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.452 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.452 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.452 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.452 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.452 12:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.452 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.452 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.452 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.452 { 00:22:03.452 "cntlid": 37, 00:22:03.452 "qid": 0, 00:22:03.452 "state": "enabled", 00:22:03.452 "thread": "nvmf_tgt_poll_group_000", 00:22:03.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:03.453 "listen_address": { 00:22:03.453 "trtype": "TCP", 00:22:03.453 "adrfam": "IPv4", 00:22:03.453 "traddr": "10.0.0.2", 00:22:03.453 "trsvcid": "4420" 00:22:03.453 }, 00:22:03.453 "peer_address": { 00:22:03.453 "trtype": "TCP", 00:22:03.453 "adrfam": "IPv4", 00:22:03.453 "traddr": "10.0.0.1", 00:22:03.453 "trsvcid": "47236" 00:22:03.453 }, 00:22:03.453 "auth": { 00:22:03.453 "state": "completed", 00:22:03.453 "digest": "sha256", 00:22:03.453 "dhgroup": "ffdhe6144" 00:22:03.453 } 00:22:03.453 } 00:22:03.453 ]' 00:22:03.453 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.453 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:03.453 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.715 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:03.715 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.715 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.715 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.715 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.715 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:03.715 12:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.656 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.656 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:04.657 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.657 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:04.916 00:22:04.916 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.916 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.916 12:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.175 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.175 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.175 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.175 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.175 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.175 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.175 { 00:22:05.175 "cntlid": 39, 00:22:05.175 "qid": 0, 00:22:05.175 "state": "enabled", 00:22:05.175 "thread": "nvmf_tgt_poll_group_000", 00:22:05.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:05.175 "listen_address": { 00:22:05.175 "trtype": "TCP", 00:22:05.175 "adrfam": 
"IPv4", 00:22:05.175 "traddr": "10.0.0.2", 00:22:05.175 "trsvcid": "4420" 00:22:05.175 }, 00:22:05.175 "peer_address": { 00:22:05.175 "trtype": "TCP", 00:22:05.175 "adrfam": "IPv4", 00:22:05.175 "traddr": "10.0.0.1", 00:22:05.175 "trsvcid": "47248" 00:22:05.175 }, 00:22:05.175 "auth": { 00:22:05.175 "state": "completed", 00:22:05.175 "digest": "sha256", 00:22:05.175 "dhgroup": "ffdhe6144" 00:22:05.175 } 00:22:05.175 } 00:22:05.175 ]' 00:22:05.175 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.175 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:05.175 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:05.175 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:05.175 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:05.436 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.436 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.436 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.436 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:05.436 12:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:06.006 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.007 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:06.007 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.007 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.007 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.007 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:06.007 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:06.007 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:06.007 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:06.266 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:22:06.266 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:06.266 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:06.266 
12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:06.266 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:06.266 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.266 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.266 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.266 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.266 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.266 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.266 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.266 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:06.834 00:22:06.834 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:06.834 12:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:06.834 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.093 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.093 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.093 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.093 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.093 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.093 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.093 { 00:22:07.093 "cntlid": 41, 00:22:07.093 "qid": 0, 00:22:07.093 "state": "enabled", 00:22:07.093 "thread": "nvmf_tgt_poll_group_000", 00:22:07.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:07.093 "listen_address": { 00:22:07.093 "trtype": "TCP", 00:22:07.093 "adrfam": "IPv4", 00:22:07.093 "traddr": "10.0.0.2", 00:22:07.093 "trsvcid": "4420" 00:22:07.093 }, 00:22:07.093 "peer_address": { 00:22:07.093 "trtype": "TCP", 00:22:07.093 "adrfam": "IPv4", 00:22:07.093 "traddr": "10.0.0.1", 00:22:07.093 "trsvcid": "47276" 00:22:07.093 }, 00:22:07.093 "auth": { 00:22:07.093 "state": "completed", 00:22:07.093 "digest": "sha256", 00:22:07.093 "dhgroup": "ffdhe8192" 00:22:07.093 } 00:22:07.093 } 00:22:07.093 ]' 00:22:07.093 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.093 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:22:07.093 12:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.093 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:07.093 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.093 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.093 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.093 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.353 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:07.353 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:07.921 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.921 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:07.921 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.921 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.921 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.921 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.921 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:07.921 12:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:08.182 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:22:08.182 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.182 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:08.182 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:08.182 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:08.182 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.182 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:22:08.182 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.182 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.182 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.182 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.182 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.182 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:08.752 00:22:08.752 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.752 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.752 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.752 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.752 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.752 12:05:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.752 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.752 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.752 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.752 { 00:22:08.752 "cntlid": 43, 00:22:08.752 "qid": 0, 00:22:08.752 "state": "enabled", 00:22:08.752 "thread": "nvmf_tgt_poll_group_000", 00:22:08.752 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:08.752 "listen_address": { 00:22:08.752 "trtype": "TCP", 00:22:08.752 "adrfam": "IPv4", 00:22:08.752 "traddr": "10.0.0.2", 00:22:08.752 "trsvcid": "4420" 00:22:08.752 }, 00:22:08.752 "peer_address": { 00:22:08.752 "trtype": "TCP", 00:22:08.752 "adrfam": "IPv4", 00:22:08.752 "traddr": "10.0.0.1", 00:22:08.752 "trsvcid": "47308" 00:22:08.752 }, 00:22:08.752 "auth": { 00:22:08.752 "state": "completed", 00:22:08.752 "digest": "sha256", 00:22:08.752 "dhgroup": "ffdhe8192" 00:22:08.752 } 00:22:08.752 } 00:22:08.752 ]' 00:22:08.752 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.752 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:08.752 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.012 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.012 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.012 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.012 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.012 12:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:09.012 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:09.012 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.948 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:09.948 12:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:10.535 00:22:10.535 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.535 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.535 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.535 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.535 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.535 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.535 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.535 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.535 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.535 { 00:22:10.535 "cntlid": 45, 00:22:10.535 "qid": 0, 00:22:10.535 "state": "enabled", 00:22:10.535 "thread": "nvmf_tgt_poll_group_000", 00:22:10.535 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:10.535 
"listen_address": { 00:22:10.535 "trtype": "TCP", 00:22:10.535 "adrfam": "IPv4", 00:22:10.535 "traddr": "10.0.0.2", 00:22:10.535 "trsvcid": "4420" 00:22:10.535 }, 00:22:10.535 "peer_address": { 00:22:10.535 "trtype": "TCP", 00:22:10.535 "adrfam": "IPv4", 00:22:10.535 "traddr": "10.0.0.1", 00:22:10.535 "trsvcid": "47334" 00:22:10.535 }, 00:22:10.535 "auth": { 00:22:10.535 "state": "completed", 00:22:10.535 "digest": "sha256", 00:22:10.535 "dhgroup": "ffdhe8192" 00:22:10.535 } 00:22:10.535 } 00:22:10.535 ]' 00:22:10.535 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.535 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:10.535 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.796 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:10.796 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.796 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.796 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.796 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.796 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:10.796 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:11.364 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:11.623 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.196 00:22:12.196 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.196 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:22:12.196 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.457 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.457 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.457 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.457 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.457 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.457 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.457 { 00:22:12.457 "cntlid": 47, 00:22:12.457 "qid": 0, 00:22:12.457 "state": "enabled", 00:22:12.457 "thread": "nvmf_tgt_poll_group_000", 00:22:12.457 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:12.457 "listen_address": { 00:22:12.457 "trtype": "TCP", 00:22:12.457 "adrfam": "IPv4", 00:22:12.457 "traddr": "10.0.0.2", 00:22:12.457 "trsvcid": "4420" 00:22:12.457 }, 00:22:12.457 "peer_address": { 00:22:12.457 "trtype": "TCP", 00:22:12.457 "adrfam": "IPv4", 00:22:12.457 "traddr": "10.0.0.1", 00:22:12.457 "trsvcid": "36776" 00:22:12.457 }, 00:22:12.457 "auth": { 00:22:12.457 "state": "completed", 00:22:12.457 "digest": "sha256", 00:22:12.457 "dhgroup": "ffdhe8192" 00:22:12.457 } 00:22:12.457 } 00:22:12.457 ]' 00:22:12.457 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.457 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:22:12.457 12:05:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:12.457 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:12.457 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:12.457 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:12.457 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:12.457 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:12.718 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:12.718 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:13.288 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:13.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:13.288 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:13.288 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:13.288 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.288 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.288 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:13.288 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:13.288 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:13.288 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:13.288 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:13.547 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:22:13.547 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:13.547 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:13.547 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:13.548 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:13.548 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:13.548 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.548 
12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.548 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.548 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.548 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.548 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.548 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:13.548 00:22:13.809 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.809 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.809 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.809 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.809 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.809 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.809 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.809 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.809 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.809 { 00:22:13.809 "cntlid": 49, 00:22:13.809 "qid": 0, 00:22:13.809 "state": "enabled", 00:22:13.809 "thread": "nvmf_tgt_poll_group_000", 00:22:13.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:13.809 "listen_address": { 00:22:13.809 "trtype": "TCP", 00:22:13.809 "adrfam": "IPv4", 00:22:13.809 "traddr": "10.0.0.2", 00:22:13.809 "trsvcid": "4420" 00:22:13.809 }, 00:22:13.809 "peer_address": { 00:22:13.809 "trtype": "TCP", 00:22:13.809 "adrfam": "IPv4", 00:22:13.809 "traddr": "10.0.0.1", 00:22:13.809 "trsvcid": "36804" 00:22:13.809 }, 00:22:13.809 "auth": { 00:22:13.809 "state": "completed", 00:22:13.809 "digest": "sha384", 00:22:13.809 "dhgroup": "null" 00:22:13.809 } 00:22:13.809 } 00:22:13.809 ]' 00:22:13.809 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.809 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:13.809 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:14.071 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:14.071 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:14.071 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:14.071 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:22:14.071 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.333 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:14.333 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.903 12:05:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:14.903 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.163 00:22:15.163 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:15.163 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.163 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:15.423 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.423 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:15.423 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.423 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.423 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.423 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.423 { 00:22:15.423 "cntlid": 51, 00:22:15.423 "qid": 0, 00:22:15.423 "state": "enabled", 00:22:15.423 "thread": "nvmf_tgt_poll_group_000", 00:22:15.423 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:15.423 "listen_address": { 00:22:15.423 "trtype": "TCP", 00:22:15.423 "adrfam": "IPv4", 00:22:15.423 "traddr": "10.0.0.2", 00:22:15.423 "trsvcid": "4420" 00:22:15.423 }, 00:22:15.423 "peer_address": { 00:22:15.423 "trtype": "TCP", 00:22:15.423 "adrfam": "IPv4", 00:22:15.423 "traddr": "10.0.0.1", 00:22:15.423 "trsvcid": "36830" 00:22:15.423 }, 00:22:15.423 "auth": { 00:22:15.423 "state": "completed", 00:22:15.423 "digest": "sha384", 00:22:15.423 "dhgroup": "null" 00:22:15.423 } 00:22:15.423 } 00:22:15.423 ]' 00:22:15.423 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.423 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:15.423 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.423 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:15.423 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.683 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.683 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.683 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.683 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:15.683 12:05:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:16.256 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.256 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:16.256 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.256 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.256 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.256 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.256 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:16.256 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:16.518 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:22:16.518 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:22:16.518 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:16.518 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:16.518 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:16.518 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.518 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.518 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.518 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.518 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.518 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.518 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.518 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:16.778 00:22:16.778 12:05:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.778 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.778 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.038 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.038 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.038 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.038 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.038 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.038 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.038 { 00:22:17.038 "cntlid": 53, 00:22:17.038 "qid": 0, 00:22:17.038 "state": "enabled", 00:22:17.038 "thread": "nvmf_tgt_poll_group_000", 00:22:17.038 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:17.038 "listen_address": { 00:22:17.038 "trtype": "TCP", 00:22:17.038 "adrfam": "IPv4", 00:22:17.038 "traddr": "10.0.0.2", 00:22:17.038 "trsvcid": "4420" 00:22:17.038 }, 00:22:17.038 "peer_address": { 00:22:17.038 "trtype": "TCP", 00:22:17.038 "adrfam": "IPv4", 00:22:17.038 "traddr": "10.0.0.1", 00:22:17.038 "trsvcid": "36862" 00:22:17.038 }, 00:22:17.038 "auth": { 00:22:17.038 "state": "completed", 00:22:17.038 "digest": "sha384", 00:22:17.038 "dhgroup": "null" 00:22:17.038 } 00:22:17.038 } 00:22:17.038 ]' 00:22:17.038 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:22:17.038 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:17.038 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.038 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:17.038 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.038 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.038 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.038 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.296 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:17.296 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:17.866 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:17.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:17.866 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:17.866 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.866 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.866 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.866 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:17.866 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:17.866 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:22:18.126 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:22:18.126 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.126 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:18.126 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:18.126 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:18.126 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.126 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:18.126 
12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.126 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.126 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.126 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:18.126 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.126 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:18.386 00:22:18.386 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:18.386 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:18.386 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:18.646 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:18.646 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:18.646 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.646 12:05:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.646 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.646 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:18.646 { 00:22:18.646 "cntlid": 55, 00:22:18.646 "qid": 0, 00:22:18.646 "state": "enabled", 00:22:18.646 "thread": "nvmf_tgt_poll_group_000", 00:22:18.646 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:18.646 "listen_address": { 00:22:18.646 "trtype": "TCP", 00:22:18.646 "adrfam": "IPv4", 00:22:18.646 "traddr": "10.0.0.2", 00:22:18.646 "trsvcid": "4420" 00:22:18.646 }, 00:22:18.646 "peer_address": { 00:22:18.646 "trtype": "TCP", 00:22:18.646 "adrfam": "IPv4", 00:22:18.646 "traddr": "10.0.0.1", 00:22:18.646 "trsvcid": "36892" 00:22:18.646 }, 00:22:18.646 "auth": { 00:22:18.646 "state": "completed", 00:22:18.646 "digest": "sha384", 00:22:18.646 "dhgroup": "null" 00:22:18.646 } 00:22:18.646 } 00:22:18.646 ]' 00:22:18.646 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:18.646 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:18.646 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:18.646 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:18.646 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:18.646 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:18.646 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:18.646 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:18.906 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:18.906 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:19.473 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:19.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:19.473 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:19.473 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.473 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.473 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.473 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:19.473 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:19.473 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:19.473 12:05:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:19.732 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:22:19.732 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:19.732 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:19.732 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:19.732 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:19.732 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:19.732 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.732 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.732 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.732 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.732 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.732 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.732 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:19.992 00:22:19.992 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.992 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.992 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.992 { 00:22:19.992 "cntlid": 57, 00:22:19.992 "qid": 0, 00:22:19.992 "state": "enabled", 00:22:19.992 "thread": "nvmf_tgt_poll_group_000", 00:22:19.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:19.992 "listen_address": { 00:22:19.992 "trtype": "TCP", 00:22:19.992 "adrfam": "IPv4", 00:22:19.992 "traddr": "10.0.0.2", 00:22:19.992 
"trsvcid": "4420" 00:22:19.992 }, 00:22:19.992 "peer_address": { 00:22:19.992 "trtype": "TCP", 00:22:19.992 "adrfam": "IPv4", 00:22:19.992 "traddr": "10.0.0.1", 00:22:19.992 "trsvcid": "36916" 00:22:19.992 }, 00:22:19.992 "auth": { 00:22:19.992 "state": "completed", 00:22:19.992 "digest": "sha384", 00:22:19.992 "dhgroup": "ffdhe2048" 00:22:19.992 } 00:22:19.992 } 00:22:19.992 ]' 00:22:19.992 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:20.252 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:20.252 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:20.252 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:20.252 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:20.252 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:20.252 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:20.252 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:20.512 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:20.512 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:21.082 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:21.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:21.082 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:21.082 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.082 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.082 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.082 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:21.082 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:21.082 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:21.341 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:22:21.341 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.341 12:05:46 
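Editor's note: the `--dhchap-secret` strings above follow the NVMe in-band authentication secret representation, `DHHC-1:PP:<base64 payload>:`, where `PP` (`00`..`03`) names the optional hash used to transform the secret (`00` = not transformed) and the decoded payload is the key material followed by a 4-byte CRC32. A hedged sketch splitting one of the trace's own secrets (illustrative parsing only, not SPDK's implementation):

```shell
#!/usr/bin/env bash
# One of the DHHC-1 secrets that appears verbatim in the trace above.
secret="DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==:"

pp=$(echo "$secret" | cut -d: -f2)    # hash-transform selector ("00" = none)
b64=$(echo "$secret" | cut -d: -f3)   # base64-encoded key material + CRC32

# Decoded payload length: 52 bytes here = 48-byte key + 4-byte CRC32.
keylen=$(echo -n "$b64" | base64 -d | wc -c)
echo "transform=$pp payload=${keylen} bytes"
```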
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:21.341 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:21.341 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:21.341 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.341 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.341 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.341 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.341 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.341 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.342 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.342 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:21.342 00:22:21.342 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.342 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.342 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.601 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.601 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.601 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.601 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.601 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.601 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.601 { 00:22:21.601 "cntlid": 59, 00:22:21.601 "qid": 0, 00:22:21.601 "state": "enabled", 00:22:21.601 "thread": "nvmf_tgt_poll_group_000", 00:22:21.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:21.601 "listen_address": { 00:22:21.601 "trtype": "TCP", 00:22:21.601 "adrfam": "IPv4", 00:22:21.601 "traddr": "10.0.0.2", 00:22:21.601 "trsvcid": "4420" 00:22:21.601 }, 00:22:21.601 "peer_address": { 00:22:21.601 "trtype": "TCP", 00:22:21.601 "adrfam": "IPv4", 00:22:21.601 "traddr": "10.0.0.1", 00:22:21.601 "trsvcid": "36948" 00:22:21.601 }, 00:22:21.601 "auth": { 00:22:21.601 "state": "completed", 00:22:21.601 "digest": "sha384", 00:22:21.601 "dhgroup": "ffdhe2048" 00:22:21.601 } 00:22:21.601 } 00:22:21.601 ]' 00:22:21.601 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.601 12:05:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:21.601 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.601 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:21.861 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.861 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.861 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.861 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:21.861 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:21.861 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:22.432 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:22.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:22.432 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:22.432 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.432 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.432 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.432 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:22.432 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:22.432 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:22.693 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:22:22.693 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:22.693 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:22.693 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:22.693 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:22.693 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:22.693 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:22.693 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:22.693 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:22.693 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:22.693 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.693 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.693 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:22.964 00:22:22.964 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:22.964 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:22.964 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.225 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.225 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.225 12:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.225 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.225 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.225 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.225 { 00:22:23.225 "cntlid": 61, 00:22:23.225 "qid": 0, 00:22:23.225 "state": "enabled", 00:22:23.225 "thread": "nvmf_tgt_poll_group_000", 00:22:23.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:23.225 "listen_address": { 00:22:23.225 "trtype": "TCP", 00:22:23.225 "adrfam": "IPv4", 00:22:23.225 "traddr": "10.0.0.2", 00:22:23.225 "trsvcid": "4420" 00:22:23.225 }, 00:22:23.225 "peer_address": { 00:22:23.225 "trtype": "TCP", 00:22:23.225 "adrfam": "IPv4", 00:22:23.225 "traddr": "10.0.0.1", 00:22:23.225 "trsvcid": "33408" 00:22:23.225 }, 00:22:23.225 "auth": { 00:22:23.225 "state": "completed", 00:22:23.225 "digest": "sha384", 00:22:23.225 "dhgroup": "ffdhe2048" 00:22:23.225 } 00:22:23.225 } 00:22:23.225 ]' 00:22:23.225 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.225 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:23.225 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.225 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:23.225 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.225 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.225 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
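Editor's note: comparisons like `[[ nvme0 == \n\v\m\e\0 ]]` and `[[ sha384 == \s\h\a\3\8\4 ]]` in this trace are not garbled. The right-hand side of `[[ … == … ]]` is a glob pattern, and `set -x` re-renders a quoted (literal) pattern with every character backslash-escaped so the literal/glob distinction survives in the trace. A small runnable illustration of both forms:

```shell
#!/usr/bin/env bash
name=nvme0

# Escaped form (how xtrace prints a quoted pattern): literal comparison.
[[ $name == \n\v\m\e\0 ]] && echo "literal match"

# Unescaped/unquoted form: glob matching applies.
[[ $name == nvme* ]] && echo "glob match"
```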
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.225 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.485 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:23.485 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:24.057 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:24.058 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:24.058 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:24.058 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.058 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.058 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.058 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:24.058 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:24.058 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:22:24.319 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:22:24.319 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:24.319 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:24.319 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:24.319 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:24.319 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:24.319 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:24.319 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.319 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.319 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.319 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:24.320 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.320 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:24.581 00:22:24.581 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:24.581 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:24.581 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:24.842 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:24.842 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:24.842 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.842 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:24.842 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.842 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:24.842 { 00:22:24.842 "cntlid": 63, 00:22:24.842 "qid": 0, 00:22:24.842 "state": "enabled", 00:22:24.842 "thread": "nvmf_tgt_poll_group_000", 00:22:24.842 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:24.842 "listen_address": { 00:22:24.842 "trtype": "TCP", 00:22:24.842 "adrfam": 
"IPv4", 00:22:24.842 "traddr": "10.0.0.2", 00:22:24.842 "trsvcid": "4420" 00:22:24.842 }, 00:22:24.842 "peer_address": { 00:22:24.842 "trtype": "TCP", 00:22:24.842 "adrfam": "IPv4", 00:22:24.842 "traddr": "10.0.0.1", 00:22:24.842 "trsvcid": "33438" 00:22:24.842 }, 00:22:24.842 "auth": { 00:22:24.842 "state": "completed", 00:22:24.842 "digest": "sha384", 00:22:24.842 "dhgroup": "ffdhe2048" 00:22:24.842 } 00:22:24.842 } 00:22:24.842 ]' 00:22:24.842 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:24.842 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:24.842 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:24.842 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:24.842 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:24.842 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:24.842 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:24.842 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:25.103 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:25.103 12:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:25.675 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.675 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:25.675 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.675 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.675 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.675 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.675 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.675 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:25.675 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:25.936 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:22:25.936 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.936 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:25.936 
12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:25.936 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:25.936 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.936 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.936 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.936 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.936 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.936 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.936 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.936 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:26.198 00:22:26.198 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:26.198 12:05:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:26.198 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.198 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.198 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.198 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.198 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.198 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.198 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.198 { 00:22:26.198 "cntlid": 65, 00:22:26.198 "qid": 0, 00:22:26.198 "state": "enabled", 00:22:26.198 "thread": "nvmf_tgt_poll_group_000", 00:22:26.198 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:26.198 "listen_address": { 00:22:26.198 "trtype": "TCP", 00:22:26.198 "adrfam": "IPv4", 00:22:26.198 "traddr": "10.0.0.2", 00:22:26.198 "trsvcid": "4420" 00:22:26.198 }, 00:22:26.198 "peer_address": { 00:22:26.198 "trtype": "TCP", 00:22:26.198 "adrfam": "IPv4", 00:22:26.198 "traddr": "10.0.0.1", 00:22:26.198 "trsvcid": "33458" 00:22:26.198 }, 00:22:26.198 "auth": { 00:22:26.198 "state": "completed", 00:22:26.198 "digest": "sha384", 00:22:26.198 "dhgroup": "ffdhe3072" 00:22:26.198 } 00:22:26.198 } 00:22:26.198 ]' 00:22:26.198 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.460 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 
-- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:26.460 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.460 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:26.460 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.460 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.460 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.460 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.720 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:26.720 12:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.291 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.552 00:22:27.552 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:27.552 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:27.552 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.812 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:27.812 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:27.812 
12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.812 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.812 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.812 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:27.812 { 00:22:27.812 "cntlid": 67, 00:22:27.812 "qid": 0, 00:22:27.812 "state": "enabled", 00:22:27.812 "thread": "nvmf_tgt_poll_group_000", 00:22:27.812 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:27.812 "listen_address": { 00:22:27.812 "trtype": "TCP", 00:22:27.812 "adrfam": "IPv4", 00:22:27.812 "traddr": "10.0.0.2", 00:22:27.812 "trsvcid": "4420" 00:22:27.812 }, 00:22:27.812 "peer_address": { 00:22:27.812 "trtype": "TCP", 00:22:27.812 "adrfam": "IPv4", 00:22:27.812 "traddr": "10.0.0.1", 00:22:27.812 "trsvcid": "33478" 00:22:27.812 }, 00:22:27.812 "auth": { 00:22:27.812 "state": "completed", 00:22:27.812 "digest": "sha384", 00:22:27.812 "dhgroup": "ffdhe3072" 00:22:27.812 } 00:22:27.812 } 00:22:27.812 ]' 00:22:27.812 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:27.812 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:27.812 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:27.812 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:27.812 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.072 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.072 12:05:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.072 12:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:28.072 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:28.072 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:28.643 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:28.643 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:28.643 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:28.643 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.643 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.643 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:28.904 12:05:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:28.904 12:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:29.164 00:22:29.164 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:29.165 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:29.165 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.425 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:29.425 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:29.425 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.425 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.425 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.425 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:29.425 { 00:22:29.425 "cntlid": 69, 00:22:29.425 "qid": 0, 00:22:29.425 "state": "enabled", 00:22:29.425 "thread": "nvmf_tgt_poll_group_000", 00:22:29.425 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:29.425 "listen_address": { 00:22:29.425 "trtype": "TCP", 00:22:29.425 "adrfam": "IPv4", 00:22:29.425 "traddr": "10.0.0.2", 00:22:29.425 "trsvcid": "4420" 00:22:29.425 }, 00:22:29.425 "peer_address": { 00:22:29.425 "trtype": "TCP", 00:22:29.425 "adrfam": "IPv4", 00:22:29.425 "traddr": "10.0.0.1", 00:22:29.425 "trsvcid": "33506" 00:22:29.425 }, 00:22:29.425 "auth": { 00:22:29.425 "state": "completed", 00:22:29.425 "digest": "sha384", 00:22:29.425 "dhgroup": "ffdhe3072" 00:22:29.425 } 00:22:29.425 } 00:22:29.425 ]' 00:22:29.425 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:29.425 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:29.425 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:29.425 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:29.425 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:29.425 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:29.425 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:29.425 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.685 12:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:29.685 12:05:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:30.253 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:30.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:30.253 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:30.253 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.253 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.253 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.253 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:30.253 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:30.253 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:22:30.512 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:22:30.512 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:22:30.512 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:30.512 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:30.512 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:30.512 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.512 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:30.512 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.512 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.512 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.512 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:30.512 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:30.512 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:30.771 00:22:30.771 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:22:30.771 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.771 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.049 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:31.049 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:31.049 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.049 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.049 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.049 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:31.049 { 00:22:31.049 "cntlid": 71, 00:22:31.049 "qid": 0, 00:22:31.050 "state": "enabled", 00:22:31.050 "thread": "nvmf_tgt_poll_group_000", 00:22:31.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:31.050 "listen_address": { 00:22:31.050 "trtype": "TCP", 00:22:31.050 "adrfam": "IPv4", 00:22:31.050 "traddr": "10.0.0.2", 00:22:31.050 "trsvcid": "4420" 00:22:31.050 }, 00:22:31.050 "peer_address": { 00:22:31.050 "trtype": "TCP", 00:22:31.050 "adrfam": "IPv4", 00:22:31.050 "traddr": "10.0.0.1", 00:22:31.050 "trsvcid": "33534" 00:22:31.050 }, 00:22:31.050 "auth": { 00:22:31.050 "state": "completed", 00:22:31.050 "digest": "sha384", 00:22:31.050 "dhgroup": "ffdhe3072" 00:22:31.050 } 00:22:31.050 } 00:22:31.050 ]' 00:22:31.050 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:31.050 12:05:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:31.050 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:31.050 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:31.050 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:31.051 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:31.051 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:31.051 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:31.317 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:31.317 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:31.885 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:31.885 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:31.885 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:31.885 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.885 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:31.885 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.885 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:31.885 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:31.885 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:31.885 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:32.144 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:22:32.144 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.144 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:32.144 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:32.144 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:32.144 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.144 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.144 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.144 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.144 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.144 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.144 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.144 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:32.409 00:22:32.409 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.409 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.409 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.725 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.725 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.726 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.726 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.726 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.726 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.726 { 00:22:32.726 "cntlid": 73, 00:22:32.726 "qid": 0, 00:22:32.726 "state": "enabled", 00:22:32.726 "thread": "nvmf_tgt_poll_group_000", 00:22:32.726 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:32.726 "listen_address": { 00:22:32.726 "trtype": "TCP", 00:22:32.726 "adrfam": "IPv4", 00:22:32.726 "traddr": "10.0.0.2", 00:22:32.726 "trsvcid": "4420" 00:22:32.726 }, 00:22:32.726 "peer_address": { 00:22:32.726 "trtype": "TCP", 00:22:32.726 "adrfam": "IPv4", 00:22:32.726 "traddr": "10.0.0.1", 00:22:32.726 "trsvcid": "35098" 00:22:32.726 }, 00:22:32.726 "auth": { 00:22:32.726 "state": "completed", 00:22:32.726 "digest": "sha384", 00:22:32.726 "dhgroup": "ffdhe4096" 00:22:32.726 } 00:22:32.726 } 00:22:32.726 ]' 00:22:32.726 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:32.726 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:32.726 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:32.726 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:32.726 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:32.726 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:22:32.726 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:32.726 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.094 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:33.094 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:33.356 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:33.356 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:33.356 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:33.356 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.356 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.356 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.356 12:05:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:33.356 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:33.356 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:33.616 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:22:33.616 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:33.616 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:33.616 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:33.616 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:33.616 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:33.616 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.616 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:33.616 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:33.616 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:33.616 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:22:33.616 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.616 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:33.876 00:22:33.876 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:33.876 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:33.876 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:34.136 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:34.136 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:34.136 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.136 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.136 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.136 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:34.136 { 00:22:34.136 "cntlid": 75, 00:22:34.136 "qid": 0, 00:22:34.136 "state": 
"enabled", 00:22:34.136 "thread": "nvmf_tgt_poll_group_000", 00:22:34.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:34.136 "listen_address": { 00:22:34.136 "trtype": "TCP", 00:22:34.136 "adrfam": "IPv4", 00:22:34.136 "traddr": "10.0.0.2", 00:22:34.136 "trsvcid": "4420" 00:22:34.136 }, 00:22:34.136 "peer_address": { 00:22:34.136 "trtype": "TCP", 00:22:34.136 "adrfam": "IPv4", 00:22:34.136 "traddr": "10.0.0.1", 00:22:34.136 "trsvcid": "35128" 00:22:34.136 }, 00:22:34.136 "auth": { 00:22:34.136 "state": "completed", 00:22:34.136 "digest": "sha384", 00:22:34.136 "dhgroup": "ffdhe4096" 00:22:34.136 } 00:22:34.136 } 00:22:34.136 ]' 00:22:34.136 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:34.136 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:34.136 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:34.136 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:34.136 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:34.136 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:34.136 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:34.136 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:34.397 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret 
DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:34.397 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:34.968 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.968 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:34.968 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.968 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.968 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.968 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.968 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:34.968 12:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:35.229 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 
ffdhe4096 2 00:22:35.229 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:35.229 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:35.229 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:35.230 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:35.230 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:35.230 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.230 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.230 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.230 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.230 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.230 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.230 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:35.491 00:22:35.491 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:35.491 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:35.491 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.753 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.753 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.753 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.753 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.753 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.753 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.753 { 00:22:35.753 "cntlid": 77, 00:22:35.753 "qid": 0, 00:22:35.753 "state": "enabled", 00:22:35.753 "thread": "nvmf_tgt_poll_group_000", 00:22:35.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:35.753 "listen_address": { 00:22:35.753 "trtype": "TCP", 00:22:35.753 "adrfam": "IPv4", 00:22:35.753 "traddr": "10.0.0.2", 00:22:35.753 "trsvcid": "4420" 00:22:35.753 }, 00:22:35.753 "peer_address": { 00:22:35.753 "trtype": "TCP", 00:22:35.753 "adrfam": "IPv4", 00:22:35.753 "traddr": "10.0.0.1", 00:22:35.753 "trsvcid": "35156" 00:22:35.753 }, 00:22:35.753 "auth": { 00:22:35.753 "state": "completed", 00:22:35.753 "digest": "sha384", 00:22:35.753 "dhgroup": "ffdhe4096" 00:22:35.753 } 
00:22:35.753 } 00:22:35.753 ]' 00:22:35.753 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.753 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:35.753 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.753 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:35.753 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.753 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.753 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.754 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:36.015 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:36.015 12:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:36.586 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 
00:22:36.587 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.587 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:36.587 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.587 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.587 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.587 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.587 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:36.587 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:22:36.845 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:22:36.845 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.845 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:36.845 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:36.845 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:36.846 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.846 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- 
# rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:36.846 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.846 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.846 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.846 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:36.846 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:36.846 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:37.106 00:22:37.106 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.106 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.106 12:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.106 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.366 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:37.366 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.366 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.366 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.366 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.366 { 00:22:37.366 "cntlid": 79, 00:22:37.366 "qid": 0, 00:22:37.366 "state": "enabled", 00:22:37.366 "thread": "nvmf_tgt_poll_group_000", 00:22:37.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:37.366 "listen_address": { 00:22:37.366 "trtype": "TCP", 00:22:37.366 "adrfam": "IPv4", 00:22:37.366 "traddr": "10.0.0.2", 00:22:37.366 "trsvcid": "4420" 00:22:37.366 }, 00:22:37.366 "peer_address": { 00:22:37.366 "trtype": "TCP", 00:22:37.366 "adrfam": "IPv4", 00:22:37.366 "traddr": "10.0.0.1", 00:22:37.366 "trsvcid": "35174" 00:22:37.366 }, 00:22:37.366 "auth": { 00:22:37.366 "state": "completed", 00:22:37.366 "digest": "sha384", 00:22:37.366 "dhgroup": "ffdhe4096" 00:22:37.366 } 00:22:37.366 } 00:22:37.366 ]' 00:22:37.366 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.366 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:37.366 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.366 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:37.366 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.366 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.366 12:06:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.366 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:37.626 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:37.626 12:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:38.194 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.194 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:38.194 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.194 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.194 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.194 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:38.194 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.194 12:06:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:38.194 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:38.194 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:22:38.194 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:38.194 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:38.455 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:38.455 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:38.455 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:38.455 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.455 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.455 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.455 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.455 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.455 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.455 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:38.715 00:22:38.715 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:38.715 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:38.715 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:38.976 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:38.976 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:38.976 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.976 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.976 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.976 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:38.976 { 00:22:38.976 "cntlid": 81, 00:22:38.976 "qid": 0, 00:22:38.976 "state": "enabled", 00:22:38.976 "thread": "nvmf_tgt_poll_group_000", 00:22:38.976 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:38.976 "listen_address": { 00:22:38.976 "trtype": "TCP", 00:22:38.976 "adrfam": "IPv4", 00:22:38.976 "traddr": "10.0.0.2", 00:22:38.976 "trsvcid": "4420" 00:22:38.976 }, 00:22:38.976 "peer_address": { 00:22:38.976 "trtype": "TCP", 00:22:38.976 "adrfam": "IPv4", 00:22:38.976 "traddr": "10.0.0.1", 00:22:38.976 "trsvcid": "35210" 00:22:38.976 }, 00:22:38.976 "auth": { 00:22:38.976 "state": "completed", 00:22:38.976 "digest": "sha384", 00:22:38.976 "dhgroup": "ffdhe6144" 00:22:38.976 } 00:22:38.976 } 00:22:38.976 ]' 00:22:38.976 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:38.976 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:38.976 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:38.976 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:38.976 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:38.976 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:38.976 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:38.976 12:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:39.237 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret 
DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:39.237 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:39.808 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:39.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:39.808 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:39.808 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.808 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.808 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.808 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:39.808 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:39.808 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:40.069 12:06:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:22:40.069 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:40.069 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:40.069 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:40.069 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:40.069 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:40.069 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.069 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.069 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.069 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.069 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.069 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.069 12:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:40.329 00:22:40.329 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:40.329 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:40.329 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:40.589 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:40.589 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:40.589 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.589 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:40.589 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.589 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:40.589 { 00:22:40.589 "cntlid": 83, 00:22:40.589 "qid": 0, 00:22:40.589 "state": "enabled", 00:22:40.589 "thread": "nvmf_tgt_poll_group_000", 00:22:40.589 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:40.589 "listen_address": { 00:22:40.589 "trtype": "TCP", 00:22:40.589 "adrfam": "IPv4", 00:22:40.589 "traddr": "10.0.0.2", 00:22:40.589 "trsvcid": "4420" 00:22:40.589 }, 00:22:40.589 "peer_address": { 00:22:40.589 "trtype": "TCP", 00:22:40.589 "adrfam": "IPv4", 00:22:40.589 "traddr": "10.0.0.1", 00:22:40.589 "trsvcid": "35238" 00:22:40.589 }, 00:22:40.589 "auth": { 00:22:40.589 "state": 
"completed", 00:22:40.589 "digest": "sha384", 00:22:40.589 "dhgroup": "ffdhe6144" 00:22:40.589 } 00:22:40.589 } 00:22:40.589 ]' 00:22:40.589 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:40.589 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:40.589 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:40.589 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:40.589 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:40.589 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:40.589 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:40.589 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.848 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:40.848 12:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:41.418 12:06:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.418 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:41.418 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.418 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.418 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.418 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.418 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:41.418 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:41.679 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:22:41.679 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.679 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:41.679 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:41.679 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:41.679 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.679 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.679 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.679 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.679 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.679 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.679 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.679 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:41.939 00:22:41.939 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.939 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.939 12:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:42.202 
12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:42.202 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:42.202 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.202 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:42.202 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.203 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:42.203 { 00:22:42.203 "cntlid": 85, 00:22:42.203 "qid": 0, 00:22:42.203 "state": "enabled", 00:22:42.203 "thread": "nvmf_tgt_poll_group_000", 00:22:42.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:42.203 "listen_address": { 00:22:42.203 "trtype": "TCP", 00:22:42.203 "adrfam": "IPv4", 00:22:42.203 "traddr": "10.0.0.2", 00:22:42.203 "trsvcid": "4420" 00:22:42.203 }, 00:22:42.203 "peer_address": { 00:22:42.203 "trtype": "TCP", 00:22:42.203 "adrfam": "IPv4", 00:22:42.203 "traddr": "10.0.0.1", 00:22:42.203 "trsvcid": "40730" 00:22:42.203 }, 00:22:42.203 "auth": { 00:22:42.203 "state": "completed", 00:22:42.203 "digest": "sha384", 00:22:42.203 "dhgroup": "ffdhe6144" 00:22:42.203 } 00:22:42.203 } 00:22:42.203 ]' 00:22:42.203 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:42.203 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:42.203 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.463 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:42.463 12:06:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.463 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.463 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.463 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.463 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:42.463 12:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.405 
12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.405 12:06:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:43.405 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:43.666 00:22:43.666 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:43.666 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:43.666 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:43.927 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:43.927 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:43.927 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.927 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.927 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.927 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:43.927 { 00:22:43.927 "cntlid": 87, 00:22:43.927 
"qid": 0, 00:22:43.927 "state": "enabled", 00:22:43.927 "thread": "nvmf_tgt_poll_group_000", 00:22:43.927 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:43.927 "listen_address": { 00:22:43.927 "trtype": "TCP", 00:22:43.927 "adrfam": "IPv4", 00:22:43.927 "traddr": "10.0.0.2", 00:22:43.927 "trsvcid": "4420" 00:22:43.927 }, 00:22:43.927 "peer_address": { 00:22:43.927 "trtype": "TCP", 00:22:43.927 "adrfam": "IPv4", 00:22:43.927 "traddr": "10.0.0.1", 00:22:43.927 "trsvcid": "40762" 00:22:43.927 }, 00:22:43.927 "auth": { 00:22:43.927 "state": "completed", 00:22:43.927 "digest": "sha384", 00:22:43.927 "dhgroup": "ffdhe6144" 00:22:43.927 } 00:22:43.927 } 00:22:43.927 ]' 00:22:43.927 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:43.927 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:43.927 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:43.927 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:43.927 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.188 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.188 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.188 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:44.188 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:44.188 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:44.760 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:44.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:44.760 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:44.760 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.760 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.760 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.760 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:44.760 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:44.760 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:44.760 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:45.021 12:06:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:22:45.021 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:45.021 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:45.021 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:45.021 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:45.021 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.021 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.021 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.021 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.021 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.021 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.021 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.021 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:45.592 00:22:45.592 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:45.592 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:45.592 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:45.592 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.592 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:45.592 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.592 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.853 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.853 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:45.853 { 00:22:45.853 "cntlid": 89, 00:22:45.853 "qid": 0, 00:22:45.853 "state": "enabled", 00:22:45.853 "thread": "nvmf_tgt_poll_group_000", 00:22:45.853 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:45.853 "listen_address": { 00:22:45.853 "trtype": "TCP", 00:22:45.853 "adrfam": "IPv4", 00:22:45.853 "traddr": "10.0.0.2", 00:22:45.853 "trsvcid": "4420" 00:22:45.853 }, 00:22:45.853 "peer_address": { 00:22:45.853 "trtype": "TCP", 00:22:45.853 "adrfam": "IPv4", 00:22:45.853 "traddr": "10.0.0.1", 00:22:45.853 "trsvcid": "40792" 00:22:45.853 }, 00:22:45.853 "auth": { 00:22:45.853 "state": 
"completed", 00:22:45.853 "digest": "sha384", 00:22:45.853 "dhgroup": "ffdhe8192" 00:22:45.853 } 00:22:45.853 } 00:22:45.853 ]' 00:22:45.853 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:45.853 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:45.853 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:45.853 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:45.853 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:45.853 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:45.853 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:45.853 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:46.114 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:46.114 12:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret 
DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:46.684 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.684 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:47.256 00:22:47.256 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:47.256 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:47.256 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:47.516 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:47.516 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:47.516 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:47.516 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:47.516 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:47.516 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:47.516 { 00:22:47.516 "cntlid": 91, 00:22:47.516 "qid": 0, 00:22:47.516 "state": "enabled", 00:22:47.516 "thread": "nvmf_tgt_poll_group_000", 00:22:47.516 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:47.516 "listen_address": { 00:22:47.516 "trtype": "TCP", 00:22:47.516 "adrfam": "IPv4", 00:22:47.516 "traddr": "10.0.0.2", 00:22:47.516 "trsvcid": "4420" 00:22:47.516 }, 00:22:47.516 "peer_address": { 00:22:47.516 "trtype": "TCP", 00:22:47.516 "adrfam": "IPv4", 00:22:47.516 "traddr": "10.0.0.1", 00:22:47.516 "trsvcid": "40814" 00:22:47.516 }, 00:22:47.516 "auth": { 00:22:47.516 "state": "completed", 00:22:47.516 "digest": "sha384", 00:22:47.516 "dhgroup": "ffdhe8192" 00:22:47.516 } 00:22:47.516 } 00:22:47.516 ]' 00:22:47.516 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:47.516 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:47.516 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:47.516 12:06:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:47.516 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:47.516 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:47.516 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:47.516 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.776 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:47.776 12:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:48.347 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.347 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:48.347 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:22:48.347 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.347 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.347 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:48.347 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:48.347 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:48.607 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:48.607 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:48.607 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:48.607 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:48.607 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:48.607 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.607 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.607 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.607 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:22:48.607 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.607 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.607 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.607 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.175 00:22:49.175 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.175 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.175 12:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.175 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.175 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.175 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.175 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.175 12:06:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.175 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.175 { 00:22:49.175 "cntlid": 93, 00:22:49.175 "qid": 0, 00:22:49.175 "state": "enabled", 00:22:49.175 "thread": "nvmf_tgt_poll_group_000", 00:22:49.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:49.175 "listen_address": { 00:22:49.175 "trtype": "TCP", 00:22:49.175 "adrfam": "IPv4", 00:22:49.175 "traddr": "10.0.0.2", 00:22:49.175 "trsvcid": "4420" 00:22:49.175 }, 00:22:49.175 "peer_address": { 00:22:49.175 "trtype": "TCP", 00:22:49.175 "adrfam": "IPv4", 00:22:49.175 "traddr": "10.0.0.1", 00:22:49.175 "trsvcid": "40838" 00:22:49.175 }, 00:22:49.175 "auth": { 00:22:49.175 "state": "completed", 00:22:49.175 "digest": "sha384", 00:22:49.175 "dhgroup": "ffdhe8192" 00:22:49.175 } 00:22:49.175 } 00:22:49.175 ]' 00:22:49.175 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.175 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:49.175 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:49.435 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:49.435 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:49.435 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.435 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.435 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.435 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:49.435 12:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:50.373 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:50.943 00:22:50.943 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:50.943 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:50.943 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:50.943 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.943 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:50.943 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.943 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.943 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.943 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:50.943 { 00:22:50.943 "cntlid": 95, 00:22:50.943 "qid": 0, 00:22:50.943 "state": "enabled", 00:22:50.943 "thread": "nvmf_tgt_poll_group_000", 00:22:50.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:50.943 "listen_address": { 00:22:50.943 "trtype": "TCP", 00:22:50.943 "adrfam": "IPv4", 00:22:50.943 "traddr": "10.0.0.2", 00:22:50.943 "trsvcid": "4420" 00:22:50.943 }, 00:22:50.943 "peer_address": { 00:22:50.943 "trtype": "TCP", 00:22:50.943 "adrfam": "IPv4", 00:22:50.943 "traddr": "10.0.0.1", 
00:22:50.943 "trsvcid": "40868" 00:22:50.943 }, 00:22:50.943 "auth": { 00:22:50.943 "state": "completed", 00:22:50.943 "digest": "sha384", 00:22:50.943 "dhgroup": "ffdhe8192" 00:22:50.943 } 00:22:50.943 } 00:22:50.943 ]' 00:22:50.943 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:50.943 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:50.943 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.203 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:51.204 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.204 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.204 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.204 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:51.464 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:51.464 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:52.035 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:52.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:52.035 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:52.035 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.035 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.035 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.035 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:52.035 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:52.035 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:52.035 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:52.035 12:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:52.035 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:52.035 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:52.035 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:52.035 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:52.035 12:06:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:52.035 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:52.035 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:52.035 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.035 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.035 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.035 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:52.035 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:52.035 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:52.296 00:22:52.296 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:52.296 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:22:52.296 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:52.557 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.557 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:52.557 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.557 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:52.557 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.557 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:52.557 { 00:22:52.557 "cntlid": 97, 00:22:52.557 "qid": 0, 00:22:52.557 "state": "enabled", 00:22:52.557 "thread": "nvmf_tgt_poll_group_000", 00:22:52.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:52.557 "listen_address": { 00:22:52.557 "trtype": "TCP", 00:22:52.557 "adrfam": "IPv4", 00:22:52.557 "traddr": "10.0.0.2", 00:22:52.557 "trsvcid": "4420" 00:22:52.557 }, 00:22:52.557 "peer_address": { 00:22:52.557 "trtype": "TCP", 00:22:52.557 "adrfam": "IPv4", 00:22:52.557 "traddr": "10.0.0.1", 00:22:52.557 "trsvcid": "47202" 00:22:52.557 }, 00:22:52.557 "auth": { 00:22:52.557 "state": "completed", 00:22:52.557 "digest": "sha512", 00:22:52.557 "dhgroup": "null" 00:22:52.557 } 00:22:52.557 } 00:22:52.557 ]' 00:22:52.557 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:52.557 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:52.557 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 
00:22:52.557 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:52.557 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:52.557 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:52.557 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:52.557 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:52.818 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:52.818 12:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:53.389 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.389 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:53.389 
12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.389 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.389 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.389 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:53.389 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:53.389 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:53.650 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:22:53.650 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:53.650 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:53.650 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:53.650 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:53.650 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.650 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.650 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.650 12:06:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.650 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.650 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.650 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.650 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:53.910 00:22:53.910 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.910 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.910 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.175 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.175 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.175 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.175 12:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:54.175 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.175 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:54.175 { 00:22:54.175 "cntlid": 99, 00:22:54.175 "qid": 0, 00:22:54.175 "state": "enabled", 00:22:54.175 "thread": "nvmf_tgt_poll_group_000", 00:22:54.175 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:54.175 "listen_address": { 00:22:54.175 "trtype": "TCP", 00:22:54.175 "adrfam": "IPv4", 00:22:54.175 "traddr": "10.0.0.2", 00:22:54.175 "trsvcid": "4420" 00:22:54.175 }, 00:22:54.175 "peer_address": { 00:22:54.175 "trtype": "TCP", 00:22:54.175 "adrfam": "IPv4", 00:22:54.175 "traddr": "10.0.0.1", 00:22:54.175 "trsvcid": "47234" 00:22:54.175 }, 00:22:54.175 "auth": { 00:22:54.175 "state": "completed", 00:22:54.175 "digest": "sha512", 00:22:54.175 "dhgroup": "null" 00:22:54.175 } 00:22:54.175 } 00:22:54.175 ]' 00:22:54.175 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:54.175 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:54.175 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:54.175 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:54.175 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:54.175 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.175 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.175 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.437 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:54.437 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:22:55.007 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.007 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.007 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:55.007 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.007 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.007 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.007 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.007 12:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:55.007 12:06:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:55.268 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:22:55.268 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.268 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:55.268 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:55.268 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:55.268 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.268 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.268 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.268 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.268 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.268 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.268 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:22:55.268 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:55.528 00:22:55.528 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:55.528 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:55.528 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:55.528 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:55.528 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:55.528 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.528 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.528 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.528 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:55.528 { 00:22:55.528 "cntlid": 101, 00:22:55.528 "qid": 0, 00:22:55.528 "state": "enabled", 00:22:55.528 "thread": "nvmf_tgt_poll_group_000", 00:22:55.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:55.528 "listen_address": { 00:22:55.528 "trtype": "TCP", 00:22:55.528 "adrfam": "IPv4", 00:22:55.528 "traddr": "10.0.0.2", 00:22:55.528 "trsvcid": "4420" 
00:22:55.528 }, 00:22:55.528 "peer_address": { 00:22:55.528 "trtype": "TCP", 00:22:55.528 "adrfam": "IPv4", 00:22:55.528 "traddr": "10.0.0.1", 00:22:55.528 "trsvcid": "47262" 00:22:55.528 }, 00:22:55.529 "auth": { 00:22:55.529 "state": "completed", 00:22:55.529 "digest": "sha512", 00:22:55.529 "dhgroup": "null" 00:22:55.529 } 00:22:55.529 } 00:22:55.529 ]' 00:22:55.790 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:55.790 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:55.790 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:55.790 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:55.790 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:55.790 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:55.790 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:55.790 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:56.050 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:56.050 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 
--dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:22:56.622 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:56.622 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:56.622 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:56.622 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.622 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.622 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.622 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:56.622 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:56.622 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:56.883 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:56.883 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:56.883 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:56.883 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:56.883 12:06:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:56.883 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:56.883 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:22:56.883 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.883 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.883 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.883 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:56.883 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:56.883 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:56.883 00:22:57.143 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:57.143 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:57.143 12:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:57.143 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:57.143 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:57.143 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:57.143 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:57.143 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:57.143 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:57.143 { 00:22:57.143 "cntlid": 103, 00:22:57.143 "qid": 0, 00:22:57.143 "state": "enabled", 00:22:57.143 "thread": "nvmf_tgt_poll_group_000", 00:22:57.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:57.143 "listen_address": { 00:22:57.143 "trtype": "TCP", 00:22:57.143 "adrfam": "IPv4", 00:22:57.143 "traddr": "10.0.0.2", 00:22:57.143 "trsvcid": "4420" 00:22:57.143 }, 00:22:57.143 "peer_address": { 00:22:57.143 "trtype": "TCP", 00:22:57.143 "adrfam": "IPv4", 00:22:57.143 "traddr": "10.0.0.1", 00:22:57.143 "trsvcid": "47304" 00:22:57.143 }, 00:22:57.143 "auth": { 00:22:57.143 "state": "completed", 00:22:57.143 "digest": "sha512", 00:22:57.143 "dhgroup": "null" 00:22:57.143 } 00:22:57.143 } 00:22:57.143 ]' 00:22:57.143 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:57.143 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:57.143 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:57.404 12:06:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:57.404 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:57.404 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:57.404 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:57.404 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.404 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:57.404 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:22:57.975 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.237 12:06:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.237 
12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.237 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:58.498 00:22:58.499 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:58.499 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:58.499 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:58.760 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:58.760 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:58.760 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.760 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.760 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.760 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:58.760 { 00:22:58.760 "cntlid": 105, 00:22:58.760 "qid": 0, 00:22:58.760 "state": "enabled", 00:22:58.760 "thread": "nvmf_tgt_poll_group_000", 00:22:58.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:22:58.760 "listen_address": { 00:22:58.760 "trtype": "TCP", 00:22:58.760 "adrfam": "IPv4", 00:22:58.760 "traddr": "10.0.0.2", 00:22:58.760 "trsvcid": "4420" 00:22:58.760 }, 00:22:58.760 "peer_address": { 00:22:58.760 "trtype": "TCP", 00:22:58.760 "adrfam": "IPv4", 00:22:58.760 "traddr": "10.0.0.1", 00:22:58.760 "trsvcid": "47340" 00:22:58.760 }, 00:22:58.760 "auth": { 00:22:58.760 "state": "completed", 00:22:58.760 "digest": "sha512", 00:22:58.760 "dhgroup": "ffdhe2048" 00:22:58.760 } 00:22:58.760 } 00:22:58.760 ]' 00:22:58.760 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:58.760 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:58.760 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:58.760 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:58.760 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:58.760 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:58.761 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:58.761 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:22:59.022 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:59.022 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:22:59.593 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:59.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:59.593 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:59.593 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.593 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.593 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.593 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:59.593 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:59.593 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:59.854 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:59.854 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:59.854 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:59.854 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:59.854 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:59.854 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:59.854 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.854 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.854 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.854 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.854 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.854 12:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:59.854 12:06:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:00.114 00:23:00.114 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:00.114 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:00.114 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:00.374 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:00.374 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:00.374 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:00.374 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:00.374 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:00.374 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:00.374 { 00:23:00.374 "cntlid": 107, 00:23:00.374 "qid": 0, 00:23:00.374 "state": "enabled", 00:23:00.374 "thread": "nvmf_tgt_poll_group_000", 00:23:00.374 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:00.374 "listen_address": { 00:23:00.374 "trtype": "TCP", 00:23:00.374 "adrfam": "IPv4", 00:23:00.374 "traddr": "10.0.0.2", 00:23:00.374 "trsvcid": "4420" 00:23:00.374 }, 00:23:00.374 "peer_address": { 
00:23:00.374 "trtype": "TCP", 00:23:00.374 "adrfam": "IPv4", 00:23:00.374 "traddr": "10.0.0.1", 00:23:00.374 "trsvcid": "47376" 00:23:00.374 }, 00:23:00.374 "auth": { 00:23:00.374 "state": "completed", 00:23:00.374 "digest": "sha512", 00:23:00.374 "dhgroup": "ffdhe2048" 00:23:00.374 } 00:23:00.374 } 00:23:00.374 ]' 00:23:00.374 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:00.374 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:00.374 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:00.374 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:00.374 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:00.374 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:00.374 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:00.374 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.635 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:23:00.635 12:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:23:01.214 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.214 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:01.214 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.214 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.214 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.214 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:01.214 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:01.214 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:01.475 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:23:01.475 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:01.475 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:01.475 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:01.475 12:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:01.475 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.475 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.475 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.475 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.475 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.475 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.475 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.475 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:01.735 00:23:01.735 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:01.735 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:01.735 12:06:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:01.735 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:01.735 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:01.735 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.735 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.735 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.735 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:01.735 { 00:23:01.735 "cntlid": 109, 00:23:01.735 "qid": 0, 00:23:01.735 "state": "enabled", 00:23:01.735 "thread": "nvmf_tgt_poll_group_000", 00:23:01.735 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:01.735 "listen_address": { 00:23:01.735 "trtype": "TCP", 00:23:01.735 "adrfam": "IPv4", 00:23:01.735 "traddr": "10.0.0.2", 00:23:01.735 "trsvcid": "4420" 00:23:01.735 }, 00:23:01.735 "peer_address": { 00:23:01.735 "trtype": "TCP", 00:23:01.735 "adrfam": "IPv4", 00:23:01.735 "traddr": "10.0.0.1", 00:23:01.735 "trsvcid": "47404" 00:23:01.735 }, 00:23:01.735 "auth": { 00:23:01.735 "state": "completed", 00:23:01.735 "digest": "sha512", 00:23:01.735 "dhgroup": "ffdhe2048" 00:23:01.735 } 00:23:01.735 } 00:23:01.735 ]' 00:23:01.996 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:01.996 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:01.996 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:23:01.996 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:01.996 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:01.996 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:01.996 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:01.996 12:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.255 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:23:02.255 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:23:02.825 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:02.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:02.825 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:02.825 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.825 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.825 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.825 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:02.825 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:02.825 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:23:03.085 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:23:03.085 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.085 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:03.085 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:23:03.085 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:03.085 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.085 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:03.085 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.085 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:03.085 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.085 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:03.085 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:03.085 12:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:03.085 00:23:03.345 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:03.345 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:03.345 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:03.345 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:03.345 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:03.345 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.345 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.345 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.345 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:03.345 { 00:23:03.345 "cntlid": 111, 00:23:03.345 "qid": 0, 00:23:03.345 "state": "enabled", 00:23:03.345 "thread": "nvmf_tgt_poll_group_000", 00:23:03.345 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:03.345 "listen_address": { 00:23:03.345 "trtype": "TCP", 00:23:03.345 "adrfam": "IPv4", 00:23:03.345 "traddr": "10.0.0.2", 00:23:03.345 "trsvcid": "4420" 00:23:03.345 }, 00:23:03.345 "peer_address": { 00:23:03.345 "trtype": "TCP", 00:23:03.345 "adrfam": "IPv4", 00:23:03.345 "traddr": "10.0.0.1", 00:23:03.345 "trsvcid": "47514" 00:23:03.345 }, 00:23:03.345 "auth": { 00:23:03.345 "state": "completed", 00:23:03.345 "digest": "sha512", 00:23:03.345 "dhgroup": "ffdhe2048" 00:23:03.345 } 00:23:03.345 } 00:23:03.345 ]' 00:23:03.345 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:03.606 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:03.606 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:03.606 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:23:03.606 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:03.606 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:03.606 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:03.606 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:23:03.866 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:23:03.866 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:23:04.436 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:04.436 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:04.436 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:04.436 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.436 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.436 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.436 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:04.436 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:04.436 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:04.436 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:04.436 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:23:04.436 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:04.436 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:04.436 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:04.437 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:04.437 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:04.437 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.437 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.437 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.437 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.437 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.437 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.437 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.697 00:23:04.697 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:04.698 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.698 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:04.958 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:04.958 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:04.958 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:04.958 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:04.958 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:04.958 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:04.958 { 00:23:04.958 "cntlid": 113, 00:23:04.958 "qid": 0, 00:23:04.958 "state": "enabled", 00:23:04.958 "thread": "nvmf_tgt_poll_group_000", 00:23:04.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:04.958 "listen_address": { 00:23:04.958 "trtype": "TCP", 00:23:04.958 "adrfam": "IPv4", 00:23:04.958 "traddr": "10.0.0.2", 00:23:04.958 "trsvcid": "4420" 00:23:04.958 }, 00:23:04.958 "peer_address": { 00:23:04.958 "trtype": "TCP", 00:23:04.958 "adrfam": "IPv4", 
00:23:04.958 "traddr": "10.0.0.1", 00:23:04.958 "trsvcid": "47552" 00:23:04.958 }, 00:23:04.958 "auth": { 00:23:04.958 "state": "completed", 00:23:04.958 "digest": "sha512", 00:23:04.958 "dhgroup": "ffdhe3072" 00:23:04.958 } 00:23:04.958 } 00:23:04.958 ]' 00:23:04.958 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:04.958 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:04.958 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:04.958 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:04.958 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:05.218 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.218 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.218 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.218 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:23:05.218 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret 
DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:23:05.789 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:05.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:05.789 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:05.789 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.789 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.050 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.050 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:06.050 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:06.050 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:06.050 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:23:06.050 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:06.050 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:06.050 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe3072 00:23:06.050 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:06.050 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:06.050 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.050 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.050 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.050 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.050 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.050 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.050 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:06.311 00:23:06.311 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:06.311 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:06.311 
12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:06.571 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:06.571 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:06.571 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.571 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.571 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.571 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:06.571 { 00:23:06.571 "cntlid": 115, 00:23:06.571 "qid": 0, 00:23:06.571 "state": "enabled", 00:23:06.571 "thread": "nvmf_tgt_poll_group_000", 00:23:06.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:06.571 "listen_address": { 00:23:06.571 "trtype": "TCP", 00:23:06.571 "adrfam": "IPv4", 00:23:06.571 "traddr": "10.0.0.2", 00:23:06.571 "trsvcid": "4420" 00:23:06.571 }, 00:23:06.571 "peer_address": { 00:23:06.571 "trtype": "TCP", 00:23:06.571 "adrfam": "IPv4", 00:23:06.571 "traddr": "10.0.0.1", 00:23:06.571 "trsvcid": "47574" 00:23:06.571 }, 00:23:06.571 "auth": { 00:23:06.571 "state": "completed", 00:23:06.571 "digest": "sha512", 00:23:06.571 "dhgroup": "ffdhe3072" 00:23:06.571 } 00:23:06.571 } 00:23:06.571 ]' 00:23:06.571 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:06.571 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:06.571 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:06.571 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:06.571 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:06.571 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:06.571 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:06.571 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:06.830 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:23:06.830 12:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:23:07.400 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:07.400 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:07.400 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:07.400 12:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.400 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.400 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.400 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:07.400 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:07.400 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:07.661 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:23:07.661 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:07.661 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:07.661 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:07.661 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:07.661 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:07.661 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.661 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.661 12:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.661 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.661 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.661 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.661 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:07.921 00:23:07.921 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:07.921 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:07.921 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:08.182 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:08.182 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:08.182 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.182 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:08.182 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.182 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:08.182 { 00:23:08.182 "cntlid": 117, 00:23:08.182 "qid": 0, 00:23:08.182 "state": "enabled", 00:23:08.182 "thread": "nvmf_tgt_poll_group_000", 00:23:08.182 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:08.182 "listen_address": { 00:23:08.182 "trtype": "TCP", 00:23:08.182 "adrfam": "IPv4", 00:23:08.182 "traddr": "10.0.0.2", 00:23:08.182 "trsvcid": "4420" 00:23:08.182 }, 00:23:08.182 "peer_address": { 00:23:08.182 "trtype": "TCP", 00:23:08.182 "adrfam": "IPv4", 00:23:08.182 "traddr": "10.0.0.1", 00:23:08.182 "trsvcid": "47592" 00:23:08.182 }, 00:23:08.182 "auth": { 00:23:08.182 "state": "completed", 00:23:08.182 "digest": "sha512", 00:23:08.182 "dhgroup": "ffdhe3072" 00:23:08.182 } 00:23:08.182 } 00:23:08.182 ]' 00:23:08.182 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:08.182 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:08.182 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:08.182 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:08.182 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:08.182 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:08.182 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:08.182 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:08.442 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:23:08.442 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:23:09.011 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:09.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:09.011 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:09.011 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.011 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.011 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.011 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:09.011 12:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:09.011 12:06:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:09.271 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:23:09.271 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.271 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:09.271 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:23:09.271 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:09.271 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.271 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:09.271 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.271 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.271 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.271 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:09.271 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:09.271 12:06:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:09.532 00:23:09.532 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:09.532 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:09.532 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:09.532 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:09.532 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:09.532 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.532 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.532 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.532 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:09.532 { 00:23:09.532 "cntlid": 119, 00:23:09.532 "qid": 0, 00:23:09.532 "state": "enabled", 00:23:09.532 "thread": "nvmf_tgt_poll_group_000", 00:23:09.532 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:09.532 "listen_address": { 00:23:09.532 "trtype": "TCP", 00:23:09.532 "adrfam": "IPv4", 00:23:09.532 "traddr": "10.0.0.2", 00:23:09.532 "trsvcid": "4420" 00:23:09.532 }, 00:23:09.532 "peer_address": { 00:23:09.532 "trtype": 
"TCP", 00:23:09.532 "adrfam": "IPv4", 00:23:09.532 "traddr": "10.0.0.1", 00:23:09.532 "trsvcid": "47632" 00:23:09.532 }, 00:23:09.532 "auth": { 00:23:09.532 "state": "completed", 00:23:09.532 "digest": "sha512", 00:23:09.532 "dhgroup": "ffdhe3072" 00:23:09.532 } 00:23:09.532 } 00:23:09.532 ]' 00:23:09.532 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:09.794 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:09.794 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:09.794 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:23:09.794 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:09.794 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:09.794 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:09.794 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:10.054 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:23:10.054 12:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 
00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:10.630 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # key=key0 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:10.630 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.951 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.951 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.951 00:23:10.951 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:10.951 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:10.951 12:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.246 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.246 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.246 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.246 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.246 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.246 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:11.246 { 00:23:11.246 "cntlid": 121, 00:23:11.246 "qid": 0, 00:23:11.246 "state": "enabled", 00:23:11.246 "thread": "nvmf_tgt_poll_group_000", 00:23:11.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:11.246 "listen_address": { 00:23:11.246 "trtype": "TCP", 00:23:11.246 "adrfam": "IPv4", 00:23:11.246 "traddr": "10.0.0.2", 00:23:11.246 "trsvcid": "4420" 00:23:11.246 }, 00:23:11.246 "peer_address": { 00:23:11.246 "trtype": "TCP", 00:23:11.246 "adrfam": "IPv4", 00:23:11.246 "traddr": "10.0.0.1", 00:23:11.246 "trsvcid": "47678" 00:23:11.246 }, 00:23:11.246 "auth": { 00:23:11.246 "state": "completed", 00:23:11.246 "digest": "sha512", 00:23:11.246 "dhgroup": "ffdhe4096" 00:23:11.246 } 00:23:11.246 } 00:23:11.246 ]' 00:23:11.246 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:11.246 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:11.246 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:11.246 12:06:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:11.246 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:11.246 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.246 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.246 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.543 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:23:11.543 12:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:23:12.116 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.116 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:12.116 12:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.116 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.116 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.116 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:12.116 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:12.116 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:12.377 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:23:12.377 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:12.377 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:12.377 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:12.377 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:12.377 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:12.377 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.377 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.377 12:06:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.377 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.377 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.377 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.377 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.637 00:23:12.637 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:12.637 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:12.637 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:12.898 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.898 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:12.898 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.898 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:12.898 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.898 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:12.898 { 00:23:12.898 "cntlid": 123, 00:23:12.898 "qid": 0, 00:23:12.898 "state": "enabled", 00:23:12.898 "thread": "nvmf_tgt_poll_group_000", 00:23:12.898 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:12.898 "listen_address": { 00:23:12.898 "trtype": "TCP", 00:23:12.898 "adrfam": "IPv4", 00:23:12.898 "traddr": "10.0.0.2", 00:23:12.898 "trsvcid": "4420" 00:23:12.898 }, 00:23:12.898 "peer_address": { 00:23:12.898 "trtype": "TCP", 00:23:12.898 "adrfam": "IPv4", 00:23:12.898 "traddr": "10.0.0.1", 00:23:12.898 "trsvcid": "50232" 00:23:12.898 }, 00:23:12.898 "auth": { 00:23:12.898 "state": "completed", 00:23:12.898 "digest": "sha512", 00:23:12.898 "dhgroup": "ffdhe4096" 00:23:12.898 } 00:23:12.898 } 00:23:12.898 ]' 00:23:12.898 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:12.898 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:12.898 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:12.898 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:12.898 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:12.898 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:12.898 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:12.898 12:06:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:13.159 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:23:13.159 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:23:13.730 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:13.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:13.730 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:13.730 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.730 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.730 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.730 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:13.730 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:13.730 12:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:13.991 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:23:13.991 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:13.991 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:13.991 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:13.991 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:13.991 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:13.991 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.991 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:13.991 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:13.991 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:13.991 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.991 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:13.991 12:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:14.253 00:23:14.253 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:14.253 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:14.254 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:14.254 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:14.254 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:14.254 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:14.254 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:14.254 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:14.254 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:14.254 { 00:23:14.254 "cntlid": 125, 00:23:14.254 "qid": 0, 00:23:14.254 "state": "enabled", 00:23:14.254 "thread": "nvmf_tgt_poll_group_000", 00:23:14.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:14.254 "listen_address": { 00:23:14.254 "trtype": "TCP", 00:23:14.254 "adrfam": "IPv4", 00:23:14.254 "traddr": "10.0.0.2", 00:23:14.254 
"trsvcid": "4420" 00:23:14.254 }, 00:23:14.254 "peer_address": { 00:23:14.254 "trtype": "TCP", 00:23:14.254 "adrfam": "IPv4", 00:23:14.254 "traddr": "10.0.0.1", 00:23:14.254 "trsvcid": "50246" 00:23:14.254 }, 00:23:14.254 "auth": { 00:23:14.254 "state": "completed", 00:23:14.254 "digest": "sha512", 00:23:14.254 "dhgroup": "ffdhe4096" 00:23:14.254 } 00:23:14.254 } 00:23:14.254 ]' 00:23:14.515 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:14.515 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:14.515 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:14.515 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:14.515 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:14.515 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:14.515 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:14.515 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:14.775 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:23:14.775 12:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:23:15.346 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:15.346 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:15.346 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:15.346 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.346 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.346 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.346 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:15.346 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:15.346 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:15.608 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:23:15.608 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:15.608 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:15.608 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:23:15.608 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:15.608 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:15.608 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:15.608 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.608 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.608 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.608 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:15.608 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:15.608 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:15.869 00:23:15.869 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:15.869 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:15.869 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:15.869 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.869 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:15.869 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:15.869 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:15.869 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:15.869 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:15.869 { 00:23:15.869 "cntlid": 127, 00:23:15.869 "qid": 0, 00:23:15.869 "state": "enabled", 00:23:15.869 "thread": "nvmf_tgt_poll_group_000", 00:23:15.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:15.869 "listen_address": { 00:23:15.869 "trtype": "TCP", 00:23:15.869 "adrfam": "IPv4", 00:23:15.869 "traddr": "10.0.0.2", 00:23:15.869 "trsvcid": "4420" 00:23:15.869 }, 00:23:15.869 "peer_address": { 00:23:15.869 "trtype": "TCP", 00:23:15.869 "adrfam": "IPv4", 00:23:15.869 "traddr": "10.0.0.1", 00:23:15.869 "trsvcid": "50264" 00:23:15.869 }, 00:23:15.869 "auth": { 00:23:15.869 "state": "completed", 00:23:15.869 "digest": "sha512", 00:23:15.869 "dhgroup": "ffdhe4096" 00:23:15.869 } 00:23:15.869 } 00:23:15.869 ]' 00:23:16.130 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:16.130 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:16.130 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:16.130 12:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:23:16.130 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:16.130 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:16.130 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:16.130 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:16.390 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:23:16.390 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:23:16.961 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:16.961 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:16.961 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:16.961 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:16.961 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:23:16.961 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:16.961 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:16.961 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:16.961 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:16.961 12:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:17.221 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:23:17.221 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:17.221 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:17.221 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:17.221 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:17.221 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:17.221 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.221 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.221 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:23:17.221 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.221 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.221 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.221 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:17.482 00:23:17.482 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:17.482 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:17.482 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:17.743 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.743 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:17.743 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:17.743 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:17.743 12:06:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:17.743 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:17.743 { 00:23:17.743 "cntlid": 129, 00:23:17.743 "qid": 0, 00:23:17.743 "state": "enabled", 00:23:17.743 "thread": "nvmf_tgt_poll_group_000", 00:23:17.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:17.743 "listen_address": { 00:23:17.743 "trtype": "TCP", 00:23:17.743 "adrfam": "IPv4", 00:23:17.743 "traddr": "10.0.0.2", 00:23:17.743 "trsvcid": "4420" 00:23:17.743 }, 00:23:17.743 "peer_address": { 00:23:17.743 "trtype": "TCP", 00:23:17.743 "adrfam": "IPv4", 00:23:17.743 "traddr": "10.0.0.1", 00:23:17.743 "trsvcid": "50288" 00:23:17.743 }, 00:23:17.743 "auth": { 00:23:17.743 "state": "completed", 00:23:17.743 "digest": "sha512", 00:23:17.743 "dhgroup": "ffdhe6144" 00:23:17.743 } 00:23:17.743 } 00:23:17.743 ]' 00:23:17.743 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:17.743 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:17.743 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:17.743 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:17.743 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:17.743 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:17.743 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:17.743 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:18.005 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:23:18.005 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:23:18.576 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:18.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:18.576 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:18.576 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.576 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.576 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.576 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:18.576 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:18.576 12:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:18.836 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:23:18.836 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:18.836 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:18.836 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:18.836 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:18.836 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:18.836 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.836 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:18.836 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:18.836 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:18.836 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.836 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:18.836 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.096 00:23:19.096 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:19.096 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:19.096 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:19.357 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:19.357 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:19.357 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:19.357 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:19.357 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:19.357 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:19.357 { 00:23:19.357 "cntlid": 131, 00:23:19.357 "qid": 0, 00:23:19.357 "state": "enabled", 00:23:19.357 "thread": "nvmf_tgt_poll_group_000", 00:23:19.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:19.357 "listen_address": { 00:23:19.357 "trtype": "TCP", 00:23:19.357 "adrfam": "IPv4", 00:23:19.357 "traddr": "10.0.0.2", 00:23:19.357 
"trsvcid": "4420" 00:23:19.357 }, 00:23:19.357 "peer_address": { 00:23:19.357 "trtype": "TCP", 00:23:19.357 "adrfam": "IPv4", 00:23:19.357 "traddr": "10.0.0.1", 00:23:19.357 "trsvcid": "50324" 00:23:19.357 }, 00:23:19.357 "auth": { 00:23:19.357 "state": "completed", 00:23:19.357 "digest": "sha512", 00:23:19.357 "dhgroup": "ffdhe6144" 00:23:19.357 } 00:23:19.357 } 00:23:19.357 ]' 00:23:19.357 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:19.357 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:19.357 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:19.358 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:19.358 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:19.358 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:19.358 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:19.358 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:19.618 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:23:19.618 12:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:23:20.197 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:20.197 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:20.197 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:20.197 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.197 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.197 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.197 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:20.198 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:20.198 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:20.464 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:23:20.464 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:20.464 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:20.464 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:20.464 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:20.464 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:20.464 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.464 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.464 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.464 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.464 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.464 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.464 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.724 00:23:20.724 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:20.724 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:23:20.724 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:20.984 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.984 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:20.984 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:20.984 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:20.984 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:20.984 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:20.984 { 00:23:20.984 "cntlid": 133, 00:23:20.984 "qid": 0, 00:23:20.984 "state": "enabled", 00:23:20.984 "thread": "nvmf_tgt_poll_group_000", 00:23:20.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:20.984 "listen_address": { 00:23:20.984 "trtype": "TCP", 00:23:20.984 "adrfam": "IPv4", 00:23:20.984 "traddr": "10.0.0.2", 00:23:20.984 "trsvcid": "4420" 00:23:20.984 }, 00:23:20.984 "peer_address": { 00:23:20.984 "trtype": "TCP", 00:23:20.984 "adrfam": "IPv4", 00:23:20.984 "traddr": "10.0.0.1", 00:23:20.984 "trsvcid": "50350" 00:23:20.984 }, 00:23:20.984 "auth": { 00:23:20.984 "state": "completed", 00:23:20.984 "digest": "sha512", 00:23:20.984 "dhgroup": "ffdhe6144" 00:23:20.984 } 00:23:20.984 } 00:23:20.984 ]' 00:23:20.984 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:20.984 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:20.984 12:06:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:20.984 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:20.984 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:20.984 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:20.984 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:20.984 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:21.245 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:23:21.245 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:23:21.815 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:21.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:21.815 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:21.815 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:21.815 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:21.815 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:21.815 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:21.815 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:21.815 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:22.075 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:23:22.075 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:22.075 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:22.075 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:23:22.075 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:22.075 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:22.075 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:22.075 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.075 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:22.075 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.075 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:22.075 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:22.075 12:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:22.335 00:23:22.335 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:22.335 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:22.335 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:22.595 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.595 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:22.595 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:22.595 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:22.595 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:22.595 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:22.595 { 00:23:22.595 "cntlid": 135, 00:23:22.595 "qid": 0, 00:23:22.595 "state": "enabled", 00:23:22.595 "thread": "nvmf_tgt_poll_group_000", 00:23:22.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:22.595 "listen_address": { 00:23:22.595 "trtype": "TCP", 00:23:22.595 "adrfam": "IPv4", 00:23:22.595 "traddr": "10.0.0.2", 00:23:22.595 "trsvcid": "4420" 00:23:22.595 }, 00:23:22.595 "peer_address": { 00:23:22.595 "trtype": "TCP", 00:23:22.595 "adrfam": "IPv4", 00:23:22.595 "traddr": "10.0.0.1", 00:23:22.595 "trsvcid": "36226" 00:23:22.595 }, 00:23:22.595 "auth": { 00:23:22.595 "state": "completed", 00:23:22.595 "digest": "sha512", 00:23:22.595 "dhgroup": "ffdhe6144" 00:23:22.595 } 00:23:22.595 } 00:23:22.595 ]' 00:23:22.595 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:22.595 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:22.595 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:22.595 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:23:22.595 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:22.595 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:22.595 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:22.595 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:22.855 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:23:22.855 12:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:23:23.424 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:23.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:23.424 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:23.424 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.424 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.424 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.424 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.424 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:23.424 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:23.424 12:06:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:23.684 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:23:23.684 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:23.684 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:23.684 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:23.684 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:23.684 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:23.684 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.684 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:23.684 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:23.684 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:23.684 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.684 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.684 12:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.255 00:23:24.255 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:24.255 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:24.255 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:24.255 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.255 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:24.255 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:24.255 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:24.255 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:24.255 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:24.255 { 00:23:24.255 "cntlid": 137, 00:23:24.255 "qid": 0, 00:23:24.255 "state": "enabled", 00:23:24.255 "thread": "nvmf_tgt_poll_group_000", 00:23:24.255 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:24.255 "listen_address": { 00:23:24.255 "trtype": "TCP", 00:23:24.255 "adrfam": "IPv4", 00:23:24.255 "traddr": "10.0.0.2", 00:23:24.255 
"trsvcid": "4420" 00:23:24.255 }, 00:23:24.255 "peer_address": { 00:23:24.255 "trtype": "TCP", 00:23:24.255 "adrfam": "IPv4", 00:23:24.255 "traddr": "10.0.0.1", 00:23:24.255 "trsvcid": "36256" 00:23:24.255 }, 00:23:24.255 "auth": { 00:23:24.255 "state": "completed", 00:23:24.255 "digest": "sha512", 00:23:24.255 "dhgroup": "ffdhe8192" 00:23:24.255 } 00:23:24.255 } 00:23:24.255 ]' 00:23:24.255 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:24.514 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:24.514 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:24.514 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:24.514 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:24.514 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:24.514 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:24.514 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:24.773 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:23:24.773 12:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:25.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:25.366 12:06:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.366 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.934 00:23:25.934 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:25.934 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:25.934 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:26.193 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.193 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:26.193 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:26.193 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:26.193 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:26.193 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:26.193 { 00:23:26.193 "cntlid": 139, 00:23:26.193 "qid": 0, 00:23:26.193 "state": "enabled", 00:23:26.193 "thread": "nvmf_tgt_poll_group_000", 00:23:26.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:26.193 "listen_address": { 00:23:26.193 "trtype": "TCP", 00:23:26.193 "adrfam": "IPv4", 00:23:26.193 "traddr": "10.0.0.2", 00:23:26.193 "trsvcid": "4420" 00:23:26.193 }, 00:23:26.193 "peer_address": { 00:23:26.193 "trtype": "TCP", 00:23:26.193 "adrfam": "IPv4", 00:23:26.193 "traddr": "10.0.0.1", 00:23:26.193 "trsvcid": "36298" 00:23:26.193 }, 00:23:26.193 "auth": { 00:23:26.193 "state": "completed", 00:23:26.193 "digest": "sha512", 00:23:26.193 "dhgroup": "ffdhe8192" 00:23:26.193 } 00:23:26.193 } 00:23:26.193 ]' 00:23:26.193 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:26.193 12:06:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:26.193 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:26.193 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:26.193 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:26.193 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:26.193 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:26.194 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:26.453 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:23:26.453 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: --dhchap-ctrl-secret DHHC-1:02:MTA3Y2JmMjhlYzIzZTI1YjNiNDA3N2MxN2JhNjM5MDJmYmY5NzViMGQwYmVlODVkdxxyTA==: 00:23:27.023 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:27.023 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:27.023 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:27.023 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.023 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.023 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.023 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:27.023 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:27.023 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:27.284 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:23:27.284 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:27.284 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:27.284 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:27.284 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:23:27.284 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:27.284 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:23:27.284 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.284 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.284 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.284 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.284 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.284 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:27.854 00:23:27.854 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:27.854 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:27.854 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:27.854 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.854 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:27.854 12:06:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:27.855 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:27.855 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:27.855 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:27.855 { 00:23:27.855 "cntlid": 141, 00:23:27.855 "qid": 0, 00:23:27.855 "state": "enabled", 00:23:27.855 "thread": "nvmf_tgt_poll_group_000", 00:23:27.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:27.855 "listen_address": { 00:23:27.855 "trtype": "TCP", 00:23:27.855 "adrfam": "IPv4", 00:23:27.855 "traddr": "10.0.0.2", 00:23:27.855 "trsvcid": "4420" 00:23:27.855 }, 00:23:27.855 "peer_address": { 00:23:27.855 "trtype": "TCP", 00:23:27.855 "adrfam": "IPv4", 00:23:27.855 "traddr": "10.0.0.1", 00:23:27.855 "trsvcid": "36328" 00:23:27.855 }, 00:23:27.855 "auth": { 00:23:27.855 "state": "completed", 00:23:27.855 "digest": "sha512", 00:23:27.855 "dhgroup": "ffdhe8192" 00:23:27.855 } 00:23:27.855 } 00:23:27.855 ]' 00:23:27.855 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:28.115 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:28.115 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:28.115 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:28.115 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:28.115 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:28.115 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:28.115 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:28.376 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:23:28.376 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:01:MWFkMmQ0M2MzNGE5NWExZTIxNWIxOWEyMGU5YTExODnIlQCh: 00:23:28.949 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:28.949 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:28.949 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:28.949 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.949 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:28.949 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.949 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:28.949 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:28.949 12:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:29.210 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:29.210 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:29.210 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:29.210 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:29.210 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:29.210 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:29.210 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:29.210 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.210 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.210 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.210 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:29.210 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:29.210 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:29.472 00:23:29.472 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:29.472 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:29.472 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:29.733 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.733 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:29.733 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:29.733 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:29.733 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:29.733 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:29.733 { 00:23:29.733 "cntlid": 143, 00:23:29.733 "qid": 0, 00:23:29.733 "state": "enabled", 00:23:29.733 "thread": "nvmf_tgt_poll_group_000", 00:23:29.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:29.733 "listen_address": { 00:23:29.733 "trtype": "TCP", 00:23:29.733 "adrfam": 
"IPv4", 00:23:29.733 "traddr": "10.0.0.2", 00:23:29.733 "trsvcid": "4420" 00:23:29.733 }, 00:23:29.733 "peer_address": { 00:23:29.733 "trtype": "TCP", 00:23:29.733 "adrfam": "IPv4", 00:23:29.733 "traddr": "10.0.0.1", 00:23:29.733 "trsvcid": "36368" 00:23:29.733 }, 00:23:29.733 "auth": { 00:23:29.733 "state": "completed", 00:23:29.733 "digest": "sha512", 00:23:29.733 "dhgroup": "ffdhe8192" 00:23:29.733 } 00:23:29.733 } 00:23:29.733 ]' 00:23:29.733 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:29.733 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:29.733 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:29.733 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:29.733 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:29.995 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:29.995 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:29.995 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:29.995 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:23:29.995 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 
00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:23:30.572 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:30.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:30.572 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:30.572 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.572 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.572 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.572 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:30.572 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:30.833 12:06:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.833 12:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:31.404 00:23:31.404 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:31.404 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:31.404 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:31.665 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.665 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:31.665 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:31.665 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:31.665 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:31.665 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:31.665 { 00:23:31.665 "cntlid": 145, 00:23:31.665 "qid": 0, 00:23:31.665 "state": "enabled", 00:23:31.665 "thread": "nvmf_tgt_poll_group_000", 00:23:31.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:31.665 "listen_address": { 00:23:31.665 "trtype": "TCP", 00:23:31.665 "adrfam": "IPv4", 00:23:31.665 "traddr": "10.0.0.2", 00:23:31.665 "trsvcid": "4420" 00:23:31.665 }, 00:23:31.665 "peer_address": { 00:23:31.665 "trtype": "TCP", 00:23:31.665 "adrfam": "IPv4", 00:23:31.665 "traddr": "10.0.0.1", 00:23:31.665 "trsvcid": "36402" 00:23:31.665 }, 00:23:31.665 "auth": { 00:23:31.665 "state": 
"completed", 00:23:31.665 "digest": "sha512", 00:23:31.665 "dhgroup": "ffdhe8192" 00:23:31.665 } 00:23:31.665 } 00:23:31.665 ]' 00:23:31.665 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:31.665 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:31.665 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:31.665 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:31.665 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:31.665 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:31.665 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:31.665 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:31.926 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:23:31.926 12:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:00:YjA4NjFlYmQzMDI4MDVlNDUxZjQ0ZWY4Y2Y3ZmExMjQ4OGMwOWViZDljOTVmMzVln4Z1/g==: --dhchap-ctrl-secret 
DHHC-1:03:NjRjN2ZjYzVlNmIyYzMzMmVkZWM2NDIzMmY3YWQ5MGJhMmIwYWM2MmQzNWU5ZThkMmMwYjQxZjBlMDRhMzg1ZF17e7s=: 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:32.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:32.497 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:33.099 request: 00:23:33.099 { 00:23:33.099 "name": "nvme0", 00:23:33.099 "trtype": "tcp", 00:23:33.099 "traddr": "10.0.0.2", 00:23:33.099 "adrfam": "ipv4", 00:23:33.099 "trsvcid": "4420", 00:23:33.099 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:33.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:33.099 "prchk_reftag": false, 00:23:33.099 "prchk_guard": false, 00:23:33.099 "hdgst": false, 00:23:33.099 "ddgst": false, 00:23:33.099 "dhchap_key": "key2", 00:23:33.099 "allow_unrecognized_csi": false, 00:23:33.099 "method": "bdev_nvme_attach_controller", 00:23:33.099 "req_id": 1 00:23:33.099 } 00:23:33.099 Got JSON-RPC error response 00:23:33.099 response: 00:23:33.099 { 00:23:33.099 "code": -5, 00:23:33.099 "message": 
"Input/output error" 00:23:33.099 } 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:33.099 12:06:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.099 12:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:33.361 request: 00:23:33.361 { 00:23:33.361 "name": "nvme0", 00:23:33.361 "trtype": "tcp", 00:23:33.361 "traddr": "10.0.0.2", 00:23:33.361 "adrfam": "ipv4", 00:23:33.361 "trsvcid": "4420", 00:23:33.361 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:33.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:33.361 "prchk_reftag": false, 00:23:33.361 "prchk_guard": false, 00:23:33.361 "hdgst": 
false, 00:23:33.361 "ddgst": false, 00:23:33.361 "dhchap_key": "key1", 00:23:33.361 "dhchap_ctrlr_key": "ckey2", 00:23:33.361 "allow_unrecognized_csi": false, 00:23:33.361 "method": "bdev_nvme_attach_controller", 00:23:33.361 "req_id": 1 00:23:33.361 } 00:23:33.361 Got JSON-RPC error response 00:23:33.361 response: 00:23:33.361 { 00:23:33.361 "code": -5, 00:23:33.361 "message": "Input/output error" 00:23:33.361 } 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.361 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:33.932 request: 00:23:33.932 { 00:23:33.932 "name": "nvme0", 00:23:33.932 "trtype": 
"tcp", 00:23:33.932 "traddr": "10.0.0.2", 00:23:33.932 "adrfam": "ipv4", 00:23:33.932 "trsvcid": "4420", 00:23:33.932 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:33.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:33.932 "prchk_reftag": false, 00:23:33.932 "prchk_guard": false, 00:23:33.932 "hdgst": false, 00:23:33.932 "ddgst": false, 00:23:33.932 "dhchap_key": "key1", 00:23:33.932 "dhchap_ctrlr_key": "ckey1", 00:23:33.932 "allow_unrecognized_csi": false, 00:23:33.932 "method": "bdev_nvme_attach_controller", 00:23:33.932 "req_id": 1 00:23:33.932 } 00:23:33.932 Got JSON-RPC error response 00:23:33.932 response: 00:23:33.932 { 00:23:33.932 "code": -5, 00:23:33.932 "message": "Input/output error" 00:23:33.932 } 00:23:33.932 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:33.932 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:33.932 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:33.932 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:33.932 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:33.932 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:33.932 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.932 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:33.932 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1335646 00:23:33.932 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 1335646 ']' 00:23:33.932 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1335646 00:23:33.932 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1335646 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1335646' 00:23:33.933 killing process with pid 1335646 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1335646 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1335646 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=1361135 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 1361135 00:23:33.933 12:06:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1361135 ']' 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.933 12:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.874 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.874 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:34.874 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:34.874 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.874 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:34.875 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.875 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:34.875 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1361135 00:23:34.875 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1361135 ']' 00:23:34.875 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.875 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.875 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.875 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.875 12:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.135 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.135 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:35.135 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:35.135 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.135 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.135 null0 00:23:35.135 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.135 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:35.135 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.rsA 00:23:35.135 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.135 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.395 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.395 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.AUS ]] 00:23:35.395 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.AUS 00:23:35.395 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.dkV 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.gIz ]] 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.gIz 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.eog 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.pVe ]] 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.pVe 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Or6 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:35.396 12:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:35.967 nvme0n1 00:23:36.227 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:36.227 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:36.227 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:36.227 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:36.227 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:36.227 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:36.227 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:36.227 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:36.227 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:36.227 { 00:23:36.227 "cntlid": 1, 00:23:36.227 "qid": 0, 00:23:36.227 "state": "enabled", 00:23:36.227 "thread": "nvmf_tgt_poll_group_000", 00:23:36.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:36.227 "listen_address": { 00:23:36.227 "trtype": "TCP", 00:23:36.227 "adrfam": "IPv4", 00:23:36.227 "traddr": "10.0.0.2", 00:23:36.227 "trsvcid": "4420" 00:23:36.227 }, 00:23:36.227 "peer_address": { 00:23:36.227 "trtype": "TCP", 00:23:36.227 "adrfam": "IPv4", 00:23:36.227 "traddr": 
"10.0.0.1", 00:23:36.227 "trsvcid": "44536" 00:23:36.227 }, 00:23:36.227 "auth": { 00:23:36.227 "state": "completed", 00:23:36.227 "digest": "sha512", 00:23:36.227 "dhgroup": "ffdhe8192" 00:23:36.227 } 00:23:36.227 } 00:23:36.227 ]' 00:23:36.227 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:36.227 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:36.227 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:36.487 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:36.487 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:36.487 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:36.487 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:36.487 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:36.747 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:23:36.747 12:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:23:37.318 12:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:37.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:37.318 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:37.318 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.318 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.318 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.318 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:23:37.318 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.318 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:37.318 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.318 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:37.318 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:37.579 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:37.579 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:37.579 12:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:37.579 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:37.579 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.579 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:37.579 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.579 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:37.579 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:37.579 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:37.579 request: 00:23:37.579 { 00:23:37.579 "name": "nvme0", 00:23:37.580 "trtype": "tcp", 00:23:37.580 "traddr": "10.0.0.2", 00:23:37.580 "adrfam": "ipv4", 00:23:37.580 "trsvcid": "4420", 00:23:37.580 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:37.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:37.580 "prchk_reftag": false, 00:23:37.580 "prchk_guard": false, 00:23:37.580 "hdgst": false, 00:23:37.580 "ddgst": false, 00:23:37.580 "dhchap_key": "key3", 00:23:37.580 
"allow_unrecognized_csi": false, 00:23:37.580 "method": "bdev_nvme_attach_controller", 00:23:37.580 "req_id": 1 00:23:37.580 } 00:23:37.580 Got JSON-RPC error response 00:23:37.580 response: 00:23:37.580 { 00:23:37.580 "code": -5, 00:23:37.580 "message": "Input/output error" 00:23:37.580 } 00:23:37.580 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:37.580 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:37.580 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:37.580 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:37.580 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:37.580 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:37.580 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:37.580 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:37.841 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:37.841 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:37.841 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:37.841 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:37.841 12:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.841 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:37.841 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:37.841 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:37.841 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:37.841 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:37.841 request: 00:23:37.841 { 00:23:37.841 "name": "nvme0", 00:23:37.841 "trtype": "tcp", 00:23:37.841 "traddr": "10.0.0.2", 00:23:37.841 "adrfam": "ipv4", 00:23:37.841 "trsvcid": "4420", 00:23:37.841 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:37.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:37.841 "prchk_reftag": false, 00:23:37.841 "prchk_guard": false, 00:23:37.841 "hdgst": false, 00:23:37.841 "ddgst": false, 00:23:37.841 "dhchap_key": "key3", 00:23:37.841 "allow_unrecognized_csi": false, 00:23:37.841 "method": "bdev_nvme_attach_controller", 00:23:37.841 "req_id": 1 00:23:37.841 } 00:23:37.841 Got JSON-RPC error response 00:23:37.841 response: 00:23:37.841 { 00:23:37.841 "code": -5, 00:23:37.841 "message": "Input/output error" 00:23:37.841 } 00:23:38.102 
12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:38.102 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:38.102 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:38.102 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:38.102 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:38.102 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:23:38.102 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:23:38.102 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:38.102 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:38.102 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:38.102 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:38.363 request: 00:23:38.363 { 00:23:38.363 "name": "nvme0", 00:23:38.363 "trtype": "tcp", 00:23:38.363 "traddr": "10.0.0.2", 00:23:38.363 "adrfam": "ipv4", 00:23:38.363 "trsvcid": "4420", 00:23:38.363 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:38.363 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:38.363 "prchk_reftag": false, 00:23:38.363 "prchk_guard": false, 00:23:38.363 "hdgst": false, 00:23:38.363 "ddgst": false, 00:23:38.363 "dhchap_key": "key0", 00:23:38.363 "dhchap_ctrlr_key": "key1", 00:23:38.363 "allow_unrecognized_csi": false, 00:23:38.363 "method": "bdev_nvme_attach_controller", 00:23:38.363 "req_id": 1 00:23:38.363 } 00:23:38.363 Got JSON-RPC error response 00:23:38.363 response: 00:23:38.363 { 00:23:38.363 "code": -5, 00:23:38.363 "message": "Input/output error" 00:23:38.363 } 00:23:38.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:38.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:38.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:38.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:38.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:23:38.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:38.623 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:23:38.623 nvme0n1 00:23:38.883 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:23:38.883 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:23:38.883 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:38.883 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.883 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:38.883 12:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:39.144 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:23:39.144 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.144 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:23:39.144 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.144 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:39.144 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:39.144 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:40.084 nvme0n1 00:23:40.084 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:23:40.084 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:23:40.084 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.084 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.084 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:40.084 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.084 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:40.084 
12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.084 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:23:40.084 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:23:40.084 12:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:40.343 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:40.343 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:23:40.343 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be -l 0 --dhchap-secret DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: --dhchap-ctrl-secret DHHC-1:03:NDIyMmI3NjNiYzFjNTRjYTY0MGM0MmYxNjRiZGRkODI5ZTgzZjI2ODNiOGE4MDVjMjM0YjIxYzViMmQyNjVhOVrNz5c=: 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:40.912 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:23:41.483 request: 00:23:41.483 { 00:23:41.483 "name": "nvme0", 00:23:41.483 "trtype": "tcp", 00:23:41.483 "traddr": "10.0.0.2", 00:23:41.483 "adrfam": "ipv4", 00:23:41.483 "trsvcid": "4420", 00:23:41.483 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:41.483 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:23:41.483 "prchk_reftag": false, 00:23:41.483 "prchk_guard": false, 00:23:41.483 "hdgst": false, 00:23:41.483 "ddgst": false, 00:23:41.483 "dhchap_key": "key1", 00:23:41.483 "allow_unrecognized_csi": false, 00:23:41.483 "method": "bdev_nvme_attach_controller", 00:23:41.483 "req_id": 1 00:23:41.483 } 00:23:41.483 Got JSON-RPC error response 00:23:41.483 response: 00:23:41.483 { 00:23:41.483 "code": -5, 00:23:41.483 "message": "Input/output error" 00:23:41.483 } 00:23:41.483 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:41.483 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:41.483 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:41.483 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:41.483 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:41.483 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:41.483 12:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:42.053 nvme0n1 00:23:42.313 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:23:42.313 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:23:42.313 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:42.313 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.313 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:42.313 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:42.574 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.574 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.574 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:42.574 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.574 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:23:42.574 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:42.574 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:23:42.835 nvme0n1 00:23:42.835 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:23:42.835 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:23:42.835 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:43.096 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.096 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:43.096 12:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: '' 2s 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: ]] 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NTQ2ZTQ2ZjI1YjlhMDIyN2I2ZWJjMjcwNGQyNzg5NzIlVKBi: 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:43.096 12:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:45.641 
12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key key2 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: 2s 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:23:45.641 12:07:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: ]] 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:ZDg1MTEwMTk1ZjQ1NDRlZjVjYmYxZDJhYmYyNTgyZDQ2MTJiNGYwZTc1NGUzMTcx3xhWIQ==: 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:23:45.641 12:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:47.553 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:47.553 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:48.124 nvme0n1 00:23:48.124 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:23:48.124 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.124 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.124 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.124 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:48.124 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:48.695 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:23:48.695 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:23:48.695 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:48.695 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:48.695 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:48.695 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.695 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:48.695 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.695 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:23:48.695 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:23:48.957 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:23:48.957 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:23:48.957 12:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:49.218 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.218 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:49.218 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.218 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:49.218 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.218 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:49.218 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:49.218 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:49.218 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:49.218 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.218 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:49.218 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:49.218 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:49.218 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:23:49.478 request: 00:23:49.478 { 00:23:49.478 "name": "nvme0", 00:23:49.478 "dhchap_key": "key1", 00:23:49.478 "dhchap_ctrlr_key": "key3", 00:23:49.478 "method": "bdev_nvme_set_keys", 00:23:49.478 "req_id": 1 00:23:49.478 } 00:23:49.478 Got JSON-RPC error response 00:23:49.478 response: 00:23:49.478 { 00:23:49.478 "code": -13, 00:23:49.478 "message": "Permission denied" 00:23:49.478 } 00:23:49.478 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:49.478 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:49.478 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:49.478 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:49.478 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:49.478 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:49.478 12:07:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:49.739 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:23:49.739 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:23:50.702 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:23:50.702 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:23:50.702 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:50.977 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:23:50.977 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key key1 00:23:50.977 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.977 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:50.977 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.977 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:50.977 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:50.977 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:23:51.576 nvme0n1 00:23:51.576 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key key3 00:23:51.576 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.576 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:51.837 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.837 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:51.837 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:51.837 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:51.837 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:23:51.837 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:51.837 12:07:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:23:51.837 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:51.837 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:51.837 12:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:23:52.097 request: 00:23:52.097 { 00:23:52.097 "name": "nvme0", 00:23:52.097 "dhchap_key": "key2", 00:23:52.097 "dhchap_ctrlr_key": "key0", 00:23:52.097 "method": "bdev_nvme_set_keys", 00:23:52.097 "req_id": 1 00:23:52.097 } 00:23:52.097 Got JSON-RPC error response 00:23:52.097 response: 00:23:52.097 { 00:23:52.097 "code": -13, 00:23:52.097 "message": "Permission denied" 00:23:52.097 } 00:23:52.097 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:52.097 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:52.097 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:52.097 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:52.097 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:52.097 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:52.097 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:52.358 12:07:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:52.358 12:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:53.296 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:53.296 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:53.296 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:53.555 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:53.555 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:53.555 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:53.555 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1335681 00:23:53.555 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1335681 ']' 00:23:53.555 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1335681 00:23:53.555 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:53.555 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.555 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1335681 00:23:53.555 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:53.555 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:53.555 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 1335681' 00:23:53.555 killing process with pid 1335681 00:23:53.555 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1335681 00:23:53.555 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1335681 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@99 -- # sync 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # set +e 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:53.815 rmmod nvme_tcp 00:23:53.815 rmmod nvme_fabrics 00:23:53.815 rmmod nvme_keyring 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # set -e 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # return 0 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # '[' -n 1361135 ']' 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@337 -- # killprocess 1361135 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1361135 ']' 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1361135 00:23:53.815 
12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1361135 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1361135' 00:23:53.815 killing process with pid 1361135 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1361135 00:23:53.815 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1361135 00:23:54.076 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:54.076 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # nvmf_fini 00:23:54.076 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@254 -- # local dev 00:23:54.076 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:23:54.076 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:54.076 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:54.076 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:55.990 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:23:55.990 12:07:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:55.990 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # return 0 00:23:55.990 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:55.991 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:23:55.991 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:23:55.991 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:23:55.991 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:23:55.991 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:23:55.991 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:23:55.991 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:23:55.991 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:55.991 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:23:55.991 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:23:55.991 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:23:55.991 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:23:55.991 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:23:55.991 12:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:23:55.991 12:07:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:23:55.991 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:23:55.991 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # _dev=0 00:23:55.991 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # dev_map=() 00:23:55.991 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@274 -- # iptr 00:23:55.991 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-save 00:23:55.991 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:23:55.991 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-restore 00:23:55.991 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.rsA /tmp/spdk.key-sha256.dkV /tmp/spdk.key-sha384.eog /tmp/spdk.key-sha512.Or6 /tmp/spdk.key-sha512.AUS /tmp/spdk.key-sha384.gIz /tmp/spdk.key-sha256.pVe '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:55.991 00:23:55.991 real 2m32.760s 00:23:55.991 user 5m44.072s 00:23:55.991 sys 0m22.263s 00:23:55.991 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:55.991 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:55.991 ************************************ 00:23:55.991 END TEST nvmf_auth_target 00:23:55.991 ************************************ 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:56.252 ************************************ 00:23:56.252 START TEST nvmf_bdevio_no_huge 00:23:56.252 ************************************ 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:56.252 * Looking for test storage... 00:23:56.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:56.252 
12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:56.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.252 --rc genhtml_branch_coverage=1 00:23:56.252 --rc genhtml_function_coverage=1 00:23:56.252 --rc genhtml_legend=1 00:23:56.252 --rc 
geninfo_all_blocks=1 00:23:56.252 --rc geninfo_unexecuted_blocks=1 00:23:56.252 00:23:56.252 ' 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:56.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.252 --rc genhtml_branch_coverage=1 00:23:56.252 --rc genhtml_function_coverage=1 00:23:56.252 --rc genhtml_legend=1 00:23:56.252 --rc geninfo_all_blocks=1 00:23:56.252 --rc geninfo_unexecuted_blocks=1 00:23:56.252 00:23:56.252 ' 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:56.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.252 --rc genhtml_branch_coverage=1 00:23:56.252 --rc genhtml_function_coverage=1 00:23:56.252 --rc genhtml_legend=1 00:23:56.252 --rc geninfo_all_blocks=1 00:23:56.252 --rc geninfo_unexecuted_blocks=1 00:23:56.252 00:23:56.252 ' 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:56.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:56.252 --rc genhtml_branch_coverage=1 00:23:56.252 --rc genhtml_function_coverage=1 00:23:56.252 --rc genhtml_legend=1 00:23:56.252 --rc geninfo_all_blocks=1 00:23:56.252 --rc geninfo_unexecuted_blocks=1 00:23:56.252 00:23:56.252 ' 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:56.252 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.515 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:56.516 12:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@50 -- # : 0 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:23:56.516 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:56.516 12:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # remove_target_ns 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # xtrace_disable 00:23:56.516 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # pci_devs=() 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # 
net_devs=() 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # e810=() 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # local -ga e810 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # x722=() 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # local -ga x722 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # mlx=() 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # local -ga mlx 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:04.658 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.658 12:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:04.658 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:04.658 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:04.659 12:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:04.659 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:04.659 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # is_hw=yes 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@247 -- # create_target_ns 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo 
up 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@28 -- # local -g _dev 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:24:04.659 12:07:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772161 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 
10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:04.659 10.0.0.1 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772162 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:04.659 10.0.0.2 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:04.659 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:04.660 
12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:04.660 
12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:04.660 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:04.660 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.670 ms 00:24:04.660 00:24:04.660 --- 10.0.0.1 ping statistics --- 00:24:04.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.660 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:04.660 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:04.660 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:24:04.660 00:24:04.660 --- 10.0.0.2 ping statistics --- 00:24:04.660 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:04.660 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # return 0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:04.660 
12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:04.660 
12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # return 1 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev= 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@160 -- # return 0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:04.660 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 
00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target1 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # return 1 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev= 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@160 -- # return 0 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:24:04.661 ' 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # nvmfpid=1369310 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # waitforlisten 1369310 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1369310 ']' 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:04.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:04.661 12:07:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:04.661 [2024-12-05 12:07:28.968049] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:24:04.661 [2024-12-05 12:07:28.968123] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:04.661 [2024-12-05 12:07:29.075089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:04.661 [2024-12-05 12:07:29.135864] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.661 [2024-12-05 12:07:29.135915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.661 [2024-12-05 12:07:29.135923] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.661 [2024-12-05 12:07:29.135933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.661 [2024-12-05 12:07:29.135940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:04.661 [2024-12-05 12:07:29.137498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:04.661 [2024-12-05 12:07:29.137599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:24:04.661 [2024-12-05 12:07:29.137751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:24:04.661 [2024-12-05 12:07:29.137753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:04.921 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.921 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:24:04.921 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:04.921 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:04.921 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:04.922 [2024-12-05 12:07:29.840299] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:04.922 12:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:04.922 Malloc0 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:04.922 [2024-12-05 12:07:29.894093] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:04.922 12:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # config=() 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # local subsystem config 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:04.922 { 00:24:04.922 "params": { 00:24:04.922 "name": "Nvme$subsystem", 00:24:04.922 "trtype": "$TEST_TRANSPORT", 00:24:04.922 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:04.922 "adrfam": "ipv4", 00:24:04.922 "trsvcid": "$NVMF_PORT", 00:24:04.922 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:04.922 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:04.922 "hdgst": ${hdgst:-false}, 00:24:04.922 "ddgst": ${ddgst:-false} 00:24:04.922 }, 00:24:04.922 "method": "bdev_nvme_attach_controller" 00:24:04.922 } 00:24:04.922 EOF 00:24:04.922 )") 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # cat 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # jq . 
00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@397 -- # IFS=, 00:24:04.922 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:24:04.922 "params": { 00:24:04.922 "name": "Nvme1", 00:24:04.922 "trtype": "tcp", 00:24:04.922 "traddr": "10.0.0.2", 00:24:04.922 "adrfam": "ipv4", 00:24:04.922 "trsvcid": "4420", 00:24:04.922 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:04.922 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:04.922 "hdgst": false, 00:24:04.922 "ddgst": false 00:24:04.922 }, 00:24:04.922 "method": "bdev_nvme_attach_controller" 00:24:04.922 }' 00:24:04.922 [2024-12-05 12:07:29.952894] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:24:04.922 [2024-12-05 12:07:29.952968] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1369493 ] 00:24:05.182 [2024-12-05 12:07:30.051189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:05.182 [2024-12-05 12:07:30.114230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.182 [2024-12-05 12:07:30.114390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.182 [2024-12-05 12:07:30.114390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.442 I/O targets: 00:24:05.442 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:05.442 00:24:05.442 00:24:05.442 CUnit - A unit testing framework for C - Version 2.1-3 00:24:05.442 http://cunit.sourceforge.net/ 00:24:05.442 00:24:05.442 00:24:05.442 Suite: bdevio tests on: Nvme1n1 00:24:05.442 Test: blockdev write read block ...passed 00:24:05.442 Test: blockdev write zeroes read block ...passed 00:24:05.442 Test: blockdev write zeroes read no split ...passed 00:24:05.703 Test: blockdev write zeroes 
read split ...passed 00:24:05.703 Test: blockdev write zeroes read split partial ...passed 00:24:05.703 Test: blockdev reset ...[2024-12-05 12:07:30.521377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:05.703 [2024-12-05 12:07:30.521497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x253f810 (9): Bad file descriptor 00:24:05.703 [2024-12-05 12:07:30.533399] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:24:05.703 passed 00:24:05.703 Test: blockdev write read 8 blocks ...passed 00:24:05.703 Test: blockdev write read size > 128k ...passed 00:24:05.703 Test: blockdev write read invalid size ...passed 00:24:05.703 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:05.703 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:05.703 Test: blockdev write read max offset ...passed 00:24:05.703 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:05.703 Test: blockdev writev readv 8 blocks ...passed 00:24:05.703 Test: blockdev writev readv 30 x 1block ...passed 00:24:05.703 Test: blockdev writev readv block ...passed 00:24:05.703 Test: blockdev writev readv size > 128k ...passed 00:24:05.703 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:05.703 Test: blockdev comparev and writev ...[2024-12-05 12:07:30.710415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:05.703 [2024-12-05 12:07:30.710470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:05.703 [2024-12-05 12:07:30.710488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:05.703 [2024-12-05 
12:07:30.710497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:05.703 [2024-12-05 12:07:30.710817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:05.703 [2024-12-05 12:07:30.710830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:05.703 [2024-12-05 12:07:30.710844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:05.703 [2024-12-05 12:07:30.710855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:05.703 [2024-12-05 12:07:30.711128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:05.703 [2024-12-05 12:07:30.711140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:05.703 [2024-12-05 12:07:30.711154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:05.703 [2024-12-05 12:07:30.711165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:05.703 [2024-12-05 12:07:30.711469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:05.703 [2024-12-05 12:07:30.711484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:05.703 [2024-12-05 12:07:30.711499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:24:05.703 [2024-12-05 12:07:30.711507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:05.964 passed 00:24:05.964 Test: blockdev nvme passthru rw ...passed 00:24:05.964 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:07:30.794703] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:05.964 [2024-12-05 12:07:30.794721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:05.964 [2024-12-05 12:07:30.794825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:05.964 [2024-12-05 12:07:30.794838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:05.964 [2024-12-05 12:07:30.794941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:05.964 [2024-12-05 12:07:30.794952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:05.964 [2024-12-05 12:07:30.795060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:05.964 [2024-12-05 12:07:30.795071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:05.964 passed 00:24:05.964 Test: blockdev nvme admin passthru ...passed 00:24:05.964 Test: blockdev copy ...passed 00:24:05.964 00:24:05.964 Run Summary: Type Total Ran Passed Failed Inactive 00:24:05.964 suites 1 1 n/a 0 0 00:24:05.964 tests 23 23 23 0 0 00:24:05.964 asserts 152 152 152 0 n/a 00:24:05.964 00:24:05.964 Elapsed time = 1.045 seconds 
00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@99 -- # sync 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@102 -- # set +e 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:06.224 rmmod nvme_tcp 00:24:06.224 rmmod nvme_fabrics 00:24:06.224 rmmod nvme_keyring 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@106 -- # set -e 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@107 -- # return 0 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # '[' -n 1369310 ']' 00:24:06.224 12:07:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@337 -- # killprocess 1369310 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1369310 ']' 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1369310 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:06.224 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1369310 00:24:06.484 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:24:06.484 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:24:06.484 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1369310' 00:24:06.484 killing process with pid 1369310 00:24:06.484 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1369310 00:24:06.484 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1369310 00:24:06.744 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:06.744 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # nvmf_fini 00:24:06.744 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@254 -- # local dev 00:24:06.744 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:06.744 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 
00:24:06.744 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:06.744 12:07:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # return 0 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:08.657 12:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # _dev=0 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # dev_map=() 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@274 -- # iptr 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-save 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:08.657 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-restore 00:24:08.918 00:24:08.919 real 0m12.605s 00:24:08.919 user 0m13.975s 00:24:08.919 sys 0m6.815s 00:24:08.919 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:08.919 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:24:08.919 ************************************ 00:24:08.919 END TEST nvmf_bdevio_no_huge 00:24:08.919 ************************************ 00:24:08.919 12:07:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
00:24:08.919 12:07:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:08.919 12:07:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:08.919 12:07:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:08.919 ************************************ 00:24:08.919 START TEST nvmf_tls 00:24:08.919 ************************************ 00:24:08.919 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:08.919 * Looking for test storage... 00:24:08.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:08.919 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:08.919 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:24:08.919 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
scripts/common.sh@338 -- # local 'op=<' 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:09.181 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:09.181 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.181 --rc genhtml_branch_coverage=1 00:24:09.181 --rc genhtml_function_coverage=1 00:24:09.181 --rc genhtml_legend=1 00:24:09.181 --rc geninfo_all_blocks=1 00:24:09.182 --rc geninfo_unexecuted_blocks=1 00:24:09.182 00:24:09.182 ' 00:24:09.182 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:09.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.182 --rc genhtml_branch_coverage=1 00:24:09.182 --rc genhtml_function_coverage=1 00:24:09.182 --rc genhtml_legend=1 00:24:09.182 --rc geninfo_all_blocks=1 00:24:09.182 --rc geninfo_unexecuted_blocks=1 00:24:09.182 00:24:09.182 ' 00:24:09.182 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:09.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.182 --rc genhtml_branch_coverage=1 00:24:09.182 --rc genhtml_function_coverage=1 00:24:09.182 --rc genhtml_legend=1 00:24:09.182 --rc geninfo_all_blocks=1 00:24:09.182 --rc geninfo_unexecuted_blocks=1 00:24:09.182 00:24:09.182 ' 00:24:09.182 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:09.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:09.182 --rc genhtml_branch_coverage=1 00:24:09.182 --rc genhtml_function_coverage=1 00:24:09.182 --rc genhtml_legend=1 00:24:09.182 --rc geninfo_all_blocks=1 00:24:09.182 --rc geninfo_unexecuted_blocks=1 00:24:09.182 00:24:09.182 ' 00:24:09.182 12:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.182 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:24:09.182 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.182 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.182 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.182 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.182 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@50 -- # : 0 00:24:09.182 12:07:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:09.182 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # remove_target_ns 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:09.182 12:07:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # xtrace_disable 00:24:09.182 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # pci_devs=() 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # net_devs=() 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # e810=() 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # local -ga e810 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # x722=() 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # local -ga x722 00:24:17.339 
12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # mlx=() 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # local -ga mlx 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:17.339 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:17.339 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:17.339 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:17.339 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:17.340 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # is_hw=yes 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@247 -- # create_target_ns 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 
00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@28 -- # local -g _dev 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 
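The trace above shows `setup_interfaces` deriving addresses from an integer pool (`ip_pool=0x0a000001`, i.e. 167772161) and handing consecutive values to each initiator/target pair. A minimal sketch of that conversion, mirroring the `printf '%u.%u.%u.%u\n'` call visible in the `val_to_ip` trace lines below (this helper is a reconstruction for illustration, not the exact source of `test/nvmf/setup.sh`):

```shell
# Reconstruction of the val_to_ip idea seen in the trace: peel one byte per
# octet off the integer pool value and print it as a dotted quad.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # → 10.0.0.1 (0x0a000001)
val_to_ip 167772162   # → 10.0.0.2 (the paired target address)
```

This is why the log assigns 10.0.0.1 to the initiator and 10.0.0.2 to the namespaced target: they are simply `ip_pool` and `ip_pool + 1`.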
00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772161 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval ' ip addr add 
10.0.0.1/24 dev cvl_0_0' 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:17.340 10.0.0.1 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772162 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 
10.0.0.2 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:17.340 10.0.0.2 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i 
cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # 
dev=cvl_0_0 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:17.340 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:17.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:17.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.588 ms 00:24:17.341 00:24:17.341 --- 10.0.0.1 ping statistics --- 00:24:17.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.341 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:17.341 12:07:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:17.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:17.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:24:17.341 00:24:17.341 --- 10.0.0.2 ping statistics --- 00:24:17.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:17.341 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # return 0 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0 
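The `get_ip_address` lookups traced above recover each interface's test IP by reading `/sys/class/net/<dev>/ifalias`, which `set_ip` populated earlier with `tee`. A sketch of that round-trip, simulated against a temp file since the real sysfs node requires the physical NIC (and root):

```shell
# Simulation of the set_ip / get_ip_address round-trip from the trace:
# set_ip writes the address into the interface's ifalias attribute,
# get_ip_address reads it back. A temp file stands in for
# /sys/class/net/cvl_0_0/ifalias here.
alias_file=$(mktemp)
echo 10.0.0.1 | tee "$alias_file" >/dev/null   # set_ip side
ip=$(cat "$alias_file")                        # get_ip_address side
echo "$ip"                                     # → 10.0.0.1
rm -f "$alias_file"
```

Storing the address in `ifalias` lets later helpers resolve a logical name like `initiator0` back to its IP without re-parsing `ip addr` output.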
00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 
-- # get_net_dev initiator1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # return 1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev= 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@160 -- # return 0 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target1 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # return 1 00:24:17.341 12:07:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev= 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@160 -- # return 0 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:24:17.341 ' 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.341 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1374106 00:24:17.342 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1374106 00:24:17.342 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 
--wait-for-rpc 00:24:17.342 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1374106 ']' 00:24:17.342 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:17.342 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.342 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:17.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:17.342 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.342 12:07:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.342 [2024-12-05 12:07:41.756544] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:24:17.342 [2024-12-05 12:07:41.756610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:17.342 [2024-12-05 12:07:41.858014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.342 [2024-12-05 12:07:41.908464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:17.342 [2024-12-05 12:07:41.908513] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:17.342 [2024-12-05 12:07:41.908522] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:17.342 [2024-12-05 12:07:41.908529] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:17.342 [2024-12-05 12:07:41.908535] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:17.342 [2024-12-05 12:07:41.909308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.623 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.623 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:17.623 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:17.623 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:17.623 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.623 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:17.623 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:24:17.623 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:17.884 true 00:24:17.884 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:17.884 12:07:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:24:18.145 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:24:18.145 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:24:18.145 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:18.406 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:18.406 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:24:18.406 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:24:18.406 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:24:18.406 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:18.667 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:18.667 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:24:18.927 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:24:18.927 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:24:18.927 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:18.927 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:24:18.927 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:24:18.927 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:24:18.927 12:07:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:19.187 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:19.188 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r 
.enable_ktls 00:24:19.448 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:24:19.448 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:24:19.448 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:19.707 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:19.707 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:24:19.707 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:24:19.707 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 
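The format_interchange_psk trace above assembles an NVMe TLS PSK interchange string from a prefix (NVMeTLSkey-1), a raw hex key, and a digest identifier, delegating the encoding to a short inline `python -` snippet whose body the log does not show. A minimal sketch of what that helper plausibly does follows; the CRC32-append detail is an assumption based on the NVMe/TCP PSK interchange format, and `format_interchange_psk` here is a reconstruction, not SPDK's actual code:

```python
import base64
import struct
import zlib

def format_interchange_psk(key_hex: str, digest: int = 1,
                           prefix: str = "NVMeTLSkey-1") -> str:
    """Sketch of the PSK interchange encoding seen in the trace.

    Assumption: the payload is the configured key bytes followed by
    their CRC32 (packed little-endian), base64-encoded, per the
    NVMe/TCP PSK interchange format; the two-digit field selects
    the digest identifier.
    """
    key = key_hex.encode("ascii")  # the hex string is used as raw ASCII bytes
    payload = key + struct.pack("<I", zlib.crc32(key))
    return f"{prefix}:{digest:02d}:{base64.b64encode(payload).decode()}:"

# The trace produced this value for the first key; decoding its payload
# recovers the configured key followed by a 4-byte checksum:
logged = "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"
decoded = base64.b64decode(logged.split(":")[2])
print(decoded[:32])  # b'00112233445566778899aabbccddeeff'
```

The same encoding applied to the second key (ffeeddccbbaa99887766554433221100) yields the key_2 value seen later in the trace.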
00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=ffeeddccbbaa99887766554433221100 00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:24:19.708 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:24:19.968 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:19.968 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:24:19.968 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.pTChl9p6I6 00:24:19.968 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:24:19.968 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.u9xQdFdzck 00:24:19.968 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:19.968 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:19.968 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.pTChl9p6I6 00:24:19.968 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.u9xQdFdzck 00:24:19.968 12:07:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:19.968 12:07:44 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:24:20.228 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.pTChl9p6I6 00:24:20.228 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.pTChl9p6I6 00:24:20.228 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:20.490 [2024-12-05 12:07:45.360220] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.490 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:20.749 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:20.749 [2024-12-05 12:07:45.681036] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:20.749 [2024-12-05 12:07:45.681231] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:20.749 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:21.008 malloc0 00:24:21.008 12:07:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:21.008 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.pTChl9p6I6 
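Condensed from the rpc.py calls traced in this run, the target-side TLS setup is the following sequence. This is a non-runnable configuration sketch: the RPC path is the workspace-specific one from the log, and $KEY_PATH stands in for the mktemp-generated key file (/tmp/tmp.pTChl9p6I6 in this run):

```shell
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
KEY_PATH=/tmp/psk.key   # chmod 0600; holds the NVMeTLSkey-1:01:... string

$RPC sock_set_default_impl -i ssl                  # use the ssl sock implementation
$RPC sock_impl_set_options -i ssl --tls-version 13
$RPC framework_start_init                          # app was started with --wait-for-rpc
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420 -k                  # -k marks the listener as TLS
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC keyring_file_add_key key0 "$KEY_PATH"
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
```

Note the log also exercises sock_impl_set_options with --tls-version 7 and --enable-ktls/--disable-ktls purely to verify the get/set round trip before settling on TLS 1.3.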
00:24:21.269 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:21.530 12:07:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.pTChl9p6I6 00:24:31.531 Initializing NVMe Controllers 00:24:31.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:31.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:31.531 Initialization complete. Launching workers. 00:24:31.531 ======================================================== 00:24:31.531 Latency(us) 00:24:31.531 Device Information : IOPS MiB/s Average min max 00:24:31.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18337.15 71.63 3490.40 1145.26 204119.55 00:24:31.531 ======================================================== 00:24:31.531 Total : 18337.15 71.63 3490.40 1145.26 204119.55 00:24:31.531 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pTChl9p6I6 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pTChl9p6I6 00:24:31.531 12:07:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1376922 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1376922 /var/tmp/bdevperf.sock 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1376922 ']' 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:31.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:31.531 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:31.531 [2024-12-05 12:07:56.503763] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:24:31.531 [2024-12-05 12:07:56.503807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1376922 ] 00:24:31.793 [2024-12-05 12:07:56.583641] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.793 [2024-12-05 12:07:56.618672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:31.793 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:31.793 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:31.793 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pTChl9p6I6 00:24:32.054 12:07:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:32.054 [2024-12-05 12:07:57.021670] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:32.054 TLSTESTn1 00:24:32.315 12:07:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:32.315 Running I/O for 10 seconds... 
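bdevperf's summary below reports both iops and mibps; the latter is just iops × io_size scaled to MiB per second, which is easy to sanity-check against the logged JSON (values copied from this run's result object):

```python
# Values from the JSON result object printed by bdevperf in this run.
iops = 5579.0677204251315
io_size = 4096  # bytes per I/O

# MiB/s = IOPS * bytes-per-IO / 2^20; with 4 KiB I/O this is simply iops / 256.
mibps = iops * io_size / 2**20
print(round(mibps, 2))  # 21.79, matching the reported "mibps" field
```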
00:24:34.201 4745.00 IOPS, 18.54 MiB/s [2024-12-05T11:08:00.636Z] 4545.00 IOPS, 17.75 MiB/s [2024-12-05T11:08:01.581Z] 4974.00 IOPS, 19.43 MiB/s [2024-12-05T11:08:02.518Z] 5181.50 IOPS, 20.24 MiB/s [2024-12-05T11:08:03.460Z] 5412.20 IOPS, 21.14 MiB/s [2024-12-05T11:08:04.402Z] 5456.67 IOPS, 21.32 MiB/s [2024-12-05T11:08:05.342Z] 5501.00 IOPS, 21.49 MiB/s [2024-12-05T11:08:06.281Z] 5552.25 IOPS, 21.69 MiB/s [2024-12-05T11:08:07.665Z] 5561.44 IOPS, 21.72 MiB/s [2024-12-05T11:08:07.665Z] 5576.40 IOPS, 21.78 MiB/s 00:24:42.616 Latency(us) 00:24:42.616 [2024-12-05T11:08:07.665Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.616 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:42.616 Verification LBA range: start 0x0 length 0x2000 00:24:42.616 TLSTESTn1 : 10.02 5579.07 21.79 0.00 0.00 22905.19 5570.56 39540.05 00:24:42.616 [2024-12-05T11:08:07.665Z] =================================================================================================================== 00:24:42.616 [2024-12-05T11:08:07.665Z] Total : 5579.07 21.79 0.00 0.00 22905.19 5570.56 39540.05 00:24:42.616 { 00:24:42.616 "results": [ 00:24:42.616 { 00:24:42.616 "job": "TLSTESTn1", 00:24:42.616 "core_mask": "0x4", 00:24:42.616 "workload": "verify", 00:24:42.616 "status": "finished", 00:24:42.616 "verify_range": { 00:24:42.616 "start": 0, 00:24:42.616 "length": 8192 00:24:42.617 }, 00:24:42.617 "queue_depth": 128, 00:24:42.617 "io_size": 4096, 00:24:42.617 "runtime": 10.017982, 00:24:42.617 "iops": 5579.0677204251315, 00:24:42.617 "mibps": 21.79323328291067, 00:24:42.617 "io_failed": 0, 00:24:42.617 "io_timeout": 0, 00:24:42.617 "avg_latency_us": 22905.1919295295, 00:24:42.617 "min_latency_us": 5570.56, 00:24:42.617 "max_latency_us": 39540.05333333334 00:24:42.617 } 00:24:42.617 ], 00:24:42.617 "core_count": 1 00:24:42.617 } 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1376922 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1376922 ']' 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1376922 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1376922 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1376922' 00:24:42.617 killing process with pid 1376922 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1376922 00:24:42.617 Received shutdown signal, test time was about 10.000000 seconds 00:24:42.617 00:24:42.617 Latency(us) 00:24:42.617 [2024-12-05T11:08:07.666Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.617 [2024-12-05T11:08:07.666Z] =================================================================================================================== 00:24:42.617 [2024-12-05T11:08:07.666Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1376922 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.u9xQdFdzck 00:24:42.617 12:08:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.u9xQdFdzck 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.u9xQdFdzck 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.u9xQdFdzck 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1379065 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:42.617 12:08:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1379065 /var/tmp/bdevperf.sock 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1379065 ']' 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:42.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.617 [2024-12-05 12:08:07.470963] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:24:42.617 [2024-12-05 12:08:07.471008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379065 ] 00:24:42.617 [2024-12-05 12:08:07.520812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.617 [2024-12-05 12:08:07.549217] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:42.617 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.u9xQdFdzck 00:24:42.878 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:43.140 [2024-12-05 12:08:07.943284] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:43.140 [2024-12-05 12:08:07.950771] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:43.140 [2024-12-05 12:08:07.951407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba0be0 (107): Transport endpoint is not connected 00:24:43.140 [2024-12-05 12:08:07.952402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba0be0 (9): Bad file descriptor 00:24:43.140 [2024-12-05 
12:08:07.953405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:43.140 [2024-12-05 12:08:07.953413] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:43.140 [2024-12-05 12:08:07.953418] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:43.140 [2024-12-05 12:08:07.953424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:24:43.140 request: 00:24:43.140 { 00:24:43.140 "name": "TLSTEST", 00:24:43.140 "trtype": "tcp", 00:24:43.140 "traddr": "10.0.0.2", 00:24:43.140 "adrfam": "ipv4", 00:24:43.140 "trsvcid": "4420", 00:24:43.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:43.140 "prchk_reftag": false, 00:24:43.140 "prchk_guard": false, 00:24:43.140 "hdgst": false, 00:24:43.140 "ddgst": false, 00:24:43.140 "psk": "key0", 00:24:43.140 "allow_unrecognized_csi": false, 00:24:43.140 "method": "bdev_nvme_attach_controller", 00:24:43.140 "req_id": 1 00:24:43.140 } 00:24:43.140 Got JSON-RPC error response 00:24:43.140 response: 00:24:43.140 { 00:24:43.140 "code": -5, 00:24:43.140 "message": "Input/output error" 00:24:43.140 } 00:24:43.140 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1379065 00:24:43.140 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1379065 ']' 00:24:43.140 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1379065 00:24:43.140 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:43.140 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:43.140 12:08:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1379065 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379065' 00:24:43.140 killing process with pid 1379065 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1379065 00:24:43.140 Received shutdown signal, test time was about 10.000000 seconds 00:24:43.140 00:24:43.140 Latency(us) 00:24:43.140 [2024-12-05T11:08:08.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:43.140 [2024-12-05T11:08:08.189Z] =================================================================================================================== 00:24:43.140 [2024-12-05T11:08:08.189Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1379065 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pTChl9p6I6 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pTChl9p6I6 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.pTChl9p6I6 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pTChl9p6I6 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1379269 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1379269 /var/tmp/bdevperf.sock 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 
4096 -w verify -t 10 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1379269 ']' 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.140 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:43.140 [2024-12-05 12:08:08.186049] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:24:43.140 [2024-12-05 12:08:08.186106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379269 ] 00:24:43.401 [2024-12-05 12:08:08.270294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:43.401 [2024-12-05 12:08:08.298295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.973 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.973 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:43.973 12:08:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pTChl9p6I6 00:24:44.233 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:44.494 [2024-12-05 12:08:09.325774] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:44.494 [2024-12-05 12:08:09.334042] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:44.494 [2024-12-05 12:08:09.334062] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:44.494 [2024-12-05 12:08:09.334082] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:24:44.494 [2024-12-05 12:08:09.335055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0abe0 (107): Transport endpoint is not connected 00:24:44.494 [2024-12-05 12:08:09.336050] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b0abe0 (9): Bad file descriptor 00:24:44.494 [2024-12-05 12:08:09.337052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:44.494 [2024-12-05 12:08:09.337061] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:44.494 [2024-12-05 12:08:09.337067] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:44.494 [2024-12-05 12:08:09.337074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:24:44.494 request: 00:24:44.494 { 00:24:44.494 "name": "TLSTEST", 00:24:44.494 "trtype": "tcp", 00:24:44.494 "traddr": "10.0.0.2", 00:24:44.494 "adrfam": "ipv4", 00:24:44.494 "trsvcid": "4420", 00:24:44.494 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:44.494 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:44.494 "prchk_reftag": false, 00:24:44.494 "prchk_guard": false, 00:24:44.494 "hdgst": false, 00:24:44.494 "ddgst": false, 00:24:44.494 "psk": "key0", 00:24:44.494 "allow_unrecognized_csi": false, 00:24:44.494 "method": "bdev_nvme_attach_controller", 00:24:44.494 "req_id": 1 00:24:44.494 } 00:24:44.494 Got JSON-RPC error response 00:24:44.494 response: 00:24:44.494 { 00:24:44.494 "code": -5, 00:24:44.494 "message": "Input/output error" 00:24:44.494 } 00:24:44.494 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1379269 00:24:44.494 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1379269 ']' 00:24:44.494 12:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1379269 00:24:44.494 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:44.494 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:44.494 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1379269 00:24:44.494 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:44.494 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:44.494 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379269' 00:24:44.494 killing process with pid 1379269 00:24:44.494 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1379269 00:24:44.494 Received shutdown signal, test time was about 10.000000 seconds 00:24:44.494 00:24:44.494 Latency(us) 00:24:44.494 [2024-12-05T11:08:09.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:44.494 [2024-12-05T11:08:09.543Z] =================================================================================================================== 00:24:44.494 [2024-12-05T11:08:09.543Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:44.494 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1379269 00:24:44.494 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:44.494 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:44.495 12:08:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pTChl9p6I6 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pTChl9p6I6 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.pTChl9p6I6 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.pTChl9p6I6 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock 
-q 128 -o 4096 -w verify -t 10 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1379584 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1379584 /var/tmp/bdevperf.sock 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1379584 ']' 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:44.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:44.495 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:44.755 [2024-12-05 12:08:09.555851] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:24:44.755 [2024-12-05 12:08:09.555896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379584 ] 00:24:44.755 [2024-12-05 12:08:09.605219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:44.755 [2024-12-05 12:08:09.633572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.755 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:44.755 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:44.755 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.pTChl9p6I6 00:24:45.016 12:08:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:45.016 [2024-12-05 12:08:10.027695] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:45.016 [2024-12-05 12:08:10.035752] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:45.016 [2024-12-05 12:08:10.035771] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:45.016 [2024-12-05 12:08:10.035791] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:24:45.016 [2024-12-05 12:08:10.036770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2190be0 (107): Transport endpoint is not connected 00:24:45.016 [2024-12-05 12:08:10.037765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2190be0 (9): Bad file descriptor 00:24:45.017 [2024-12-05 12:08:10.038767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:24:45.017 [2024-12-05 12:08:10.038776] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:45.017 [2024-12-05 12:08:10.038782] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:45.017 [2024-12-05 12:08:10.038792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:24:45.017 request: 00:24:45.017 { 00:24:45.017 "name": "TLSTEST", 00:24:45.017 "trtype": "tcp", 00:24:45.017 "traddr": "10.0.0.2", 00:24:45.017 "adrfam": "ipv4", 00:24:45.017 "trsvcid": "4420", 00:24:45.017 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:45.017 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:45.017 "prchk_reftag": false, 00:24:45.017 "prchk_guard": false, 00:24:45.017 "hdgst": false, 00:24:45.017 "ddgst": false, 00:24:45.017 "psk": "key0", 00:24:45.017 "allow_unrecognized_csi": false, 00:24:45.017 "method": "bdev_nvme_attach_controller", 00:24:45.017 "req_id": 1 00:24:45.017 } 00:24:45.017 Got JSON-RPC error response 00:24:45.017 response: 00:24:45.017 { 00:24:45.017 "code": -5, 00:24:45.017 "message": "Input/output error" 00:24:45.017 } 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1379584 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1379584 ']' 00:24:45.279 12:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1379584 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1379584 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379584' 00:24:45.279 killing process with pid 1379584 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1379584 00:24:45.279 Received shutdown signal, test time was about 10.000000 seconds 00:24:45.279 00:24:45.279 Latency(us) 00:24:45.279 [2024-12-05T11:08:10.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.279 [2024-12-05T11:08:10.328Z] =================================================================================================================== 00:24:45.279 [2024-12-05T11:08:10.328Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1379584 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:45.279 12:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1379628 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:45.279 12:08:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1379628 /var/tmp/bdevperf.sock 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1379628 ']' 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:45.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:45.279 12:08:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:45.279 [2024-12-05 12:08:10.283779] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:24:45.279 [2024-12-05 12:08:10.283834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379628 ] 00:24:45.540 [2024-12-05 12:08:10.368339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.540 [2024-12-05 12:08:10.396367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.112 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:46.112 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:46.112 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:46.374 [2024-12-05 12:08:11.235316] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:46.374 [2024-12-05 12:08:11.235344] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:46.374 request: 00:24:46.374 { 00:24:46.374 "name": "key0", 00:24:46.374 "path": "", 00:24:46.374 "method": "keyring_file_add_key", 00:24:46.374 "req_id": 1 00:24:46.374 } 00:24:46.374 Got JSON-RPC error response 00:24:46.374 response: 00:24:46.374 { 00:24:46.374 "code": -1, 00:24:46.374 "message": "Operation not permitted" 00:24:46.374 } 00:24:46.374 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:46.374 [2024-12-05 12:08:11.419857] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:24:46.374 [2024-12-05 12:08:11.419882] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:46.635 request: 00:24:46.635 { 00:24:46.635 "name": "TLSTEST", 00:24:46.635 "trtype": "tcp", 00:24:46.635 "traddr": "10.0.0.2", 00:24:46.635 "adrfam": "ipv4", 00:24:46.635 "trsvcid": "4420", 00:24:46.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:46.635 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:46.635 "prchk_reftag": false, 00:24:46.635 "prchk_guard": false, 00:24:46.635 "hdgst": false, 00:24:46.635 "ddgst": false, 00:24:46.635 "psk": "key0", 00:24:46.635 "allow_unrecognized_csi": false, 00:24:46.635 "method": "bdev_nvme_attach_controller", 00:24:46.635 "req_id": 1 00:24:46.635 } 00:24:46.635 Got JSON-RPC error response 00:24:46.635 response: 00:24:46.635 { 00:24:46.635 "code": -126, 00:24:46.635 "message": "Required key not available" 00:24:46.635 } 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1379628 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1379628 ']' 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1379628 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1379628 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379628' 00:24:46.635 killing process with pid 1379628 
00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1379628 00:24:46.635 Received shutdown signal, test time was about 10.000000 seconds 00:24:46.635 00:24:46.635 Latency(us) 00:24:46.635 [2024-12-05T11:08:11.684Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.635 [2024-12-05T11:08:11.684Z] =================================================================================================================== 00:24:46.635 [2024-12-05T11:08:11.684Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1379628 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1374106 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1374106 ']' 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1374106 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1374106 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1374106' 00:24:46.635 killing process with pid 1374106 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1374106 00:24:46.635 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1374106 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=2 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Bhw1tF3Os6 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:46.897 12:08:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Bhw1tF3Os6 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1379984 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1379984 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1379984 ']' 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:46.897 12:08:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:46.897 [2024-12-05 12:08:11.901112] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:24:46.897 [2024-12-05 12:08:11.901170] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.159 [2024-12-05 12:08:11.993014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.159 [2024-12-05 12:08:12.022558] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.159 [2024-12-05 12:08:12.022590] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.159 [2024-12-05 12:08:12.022595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.159 [2024-12-05 12:08:12.022600] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.159 [2024-12-05 12:08:12.022605] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:47.159 [2024-12-05 12:08:12.023069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.730 12:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:47.730 12:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:47.730 12:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:47.730 12:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:47.730 12:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:47.730 12:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.730 12:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Bhw1tF3Os6 00:24:47.730 12:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Bhw1tF3Os6 00:24:47.730 12:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:47.991 [2024-12-05 12:08:12.895566] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.991 12:08:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:48.250 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:48.250 [2024-12-05 12:08:13.212335] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:48.250 [2024-12-05 12:08:13.212533] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:24:48.250 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:48.545 malloc0 00:24:48.545 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:48.545 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Bhw1tF3Os6 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bhw1tF3Os6 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Bhw1tF3Os6 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1380355 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1380355 /var/tmp/bdevperf.sock 
00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1380355 ']' 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:48.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:48.853 12:08:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:49.114 [2024-12-05 12:08:13.943450] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:24:49.114 [2024-12-05 12:08:13.943508] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1380355 ] 00:24:49.114 [2024-12-05 12:08:14.024295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.114 [2024-12-05 12:08:14.053181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:49.686 12:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:49.686 12:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:49.686 12:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Bhw1tF3Os6 00:24:49.947 12:08:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:50.207 [2024-12-05 12:08:15.052644] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:50.207 TLSTESTn1 00:24:50.207 12:08:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:50.207 Running I/O for 10 seconds... 
00:24:52.540 6001.00 IOPS, 23.44 MiB/s [2024-12-05T11:08:18.533Z] 5895.00 IOPS, 23.03 MiB/s [2024-12-05T11:08:19.476Z] 5595.00 IOPS, 21.86 MiB/s [2024-12-05T11:08:20.418Z] 5618.75 IOPS, 21.95 MiB/s [2024-12-05T11:08:21.361Z] 5745.40 IOPS, 22.44 MiB/s [2024-12-05T11:08:22.304Z] 5655.17 IOPS, 22.09 MiB/s [2024-12-05T11:08:23.685Z] 5670.14 IOPS, 22.15 MiB/s [2024-12-05T11:08:24.679Z] 5604.00 IOPS, 21.89 MiB/s [2024-12-05T11:08:25.620Z] 5697.56 IOPS, 22.26 MiB/s [2024-12-05T11:08:25.620Z] 5604.20 IOPS, 21.89 MiB/s 00:25:00.571 Latency(us) 00:25:00.571 [2024-12-05T11:08:25.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.571 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:00.571 Verification LBA range: start 0x0 length 0x2000 00:25:00.571 TLSTESTn1 : 10.01 5610.66 21.92 0.00 0.00 22783.79 4423.68 27852.80 00:25:00.571 [2024-12-05T11:08:25.620Z] =================================================================================================================== 00:25:00.571 [2024-12-05T11:08:25.620Z] Total : 5610.66 21.92 0.00 0.00 22783.79 4423.68 27852.80 00:25:00.571 { 00:25:00.571 "results": [ 00:25:00.571 { 00:25:00.571 "job": "TLSTESTn1", 00:25:00.571 "core_mask": "0x4", 00:25:00.571 "workload": "verify", 00:25:00.571 "status": "finished", 00:25:00.571 "verify_range": { 00:25:00.571 "start": 0, 00:25:00.571 "length": 8192 00:25:00.571 }, 00:25:00.571 "queue_depth": 128, 00:25:00.571 "io_size": 4096, 00:25:00.571 "runtime": 10.011119, 00:25:00.571 "iops": 5610.661505472066, 00:25:00.571 "mibps": 21.916646505750258, 00:25:00.571 "io_failed": 0, 00:25:00.571 "io_timeout": 0, 00:25:00.571 "avg_latency_us": 22783.786640080234, 00:25:00.571 "min_latency_us": 4423.68, 00:25:00.571 "max_latency_us": 27852.8 00:25:00.571 } 00:25:00.571 ], 00:25:00.571 "core_count": 1 00:25:00.571 } 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1380355 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1380355 ']' 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1380355 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1380355 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1380355' 00:25:00.571 killing process with pid 1380355 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1380355 00:25:00.571 Received shutdown signal, test time was about 10.000000 seconds 00:25:00.571 00:25:00.571 Latency(us) 00:25:00.571 [2024-12-05T11:08:25.620Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.571 [2024-12-05T11:08:25.620Z] =================================================================================================================== 00:25:00.571 [2024-12-05T11:08:25.620Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1380355 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Bhw1tF3Os6 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bhw1tF3Os6 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bhw1tF3Os6 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Bhw1tF3Os6 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Bhw1tF3Os6 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1382695 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1382695 /var/tmp/bdevperf.sock 00:25:00.571 
12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1382695 ']' 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:00.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:00.571 12:08:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.571 [2024-12-05 12:08:25.526601] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:25:00.571 [2024-12-05 12:08:25.526657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382695 ] 00:25:00.571 [2024-12-05 12:08:25.609805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.831 [2024-12-05 12:08:25.638716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.400 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:01.400 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:01.400 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Bhw1tF3Os6 00:25:01.660 [2024-12-05 12:08:26.477979] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Bhw1tF3Os6': 0100666 00:25:01.660 [2024-12-05 12:08:26.478005] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:01.660 request: 00:25:01.660 { 00:25:01.660 "name": "key0", 00:25:01.660 "path": "/tmp/tmp.Bhw1tF3Os6", 00:25:01.660 "method": "keyring_file_add_key", 00:25:01.660 "req_id": 1 00:25:01.660 } 00:25:01.660 Got JSON-RPC error response 00:25:01.660 response: 00:25:01.660 { 00:25:01.660 "code": -1, 00:25:01.660 "message": "Operation not permitted" 00:25:01.660 } 00:25:01.660 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:01.660 [2024-12-05 12:08:26.662512] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:01.660 [2024-12-05 12:08:26.662540] bdev_nvme.c:6749:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:25:01.660 request: 00:25:01.660 { 00:25:01.660 "name": "TLSTEST", 00:25:01.660 "trtype": "tcp", 00:25:01.660 "traddr": "10.0.0.2", 00:25:01.660 "adrfam": "ipv4", 00:25:01.660 "trsvcid": "4420", 00:25:01.660 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:01.660 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:01.660 "prchk_reftag": false, 00:25:01.660 "prchk_guard": false, 00:25:01.660 "hdgst": false, 00:25:01.660 "ddgst": false, 00:25:01.660 "psk": "key0", 00:25:01.660 "allow_unrecognized_csi": false, 00:25:01.660 "method": "bdev_nvme_attach_controller", 00:25:01.660 "req_id": 1 00:25:01.660 } 00:25:01.660 Got JSON-RPC error response 00:25:01.660 response: 00:25:01.660 { 00:25:01.660 "code": -126, 00:25:01.660 "message": "Required key not available" 00:25:01.660 } 00:25:01.660 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1382695 00:25:01.660 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1382695 ']' 00:25:01.660 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1382695 00:25:01.660 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:01.660 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.660 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1382695 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1382695' 00:25:01.921 killing process with pid 1382695 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1382695 00:25:01.921 Received shutdown signal, test time was about 10.000000 seconds 00:25:01.921 00:25:01.921 Latency(us) 00:25:01.921 [2024-12-05T11:08:26.970Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.921 [2024-12-05T11:08:26.970Z] =================================================================================================================== 00:25:01.921 [2024-12-05T11:08:26.970Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1382695 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1379984 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1379984 ']' 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1379984 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1379984 00:25:01.921 
12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1379984' 00:25:01.921 killing process with pid 1379984 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1379984 00:25:01.921 12:08:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1379984 00:25:02.182 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:25:02.182 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:02.182 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:02.182 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:02.182 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1383047 00:25:02.182 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:02.182 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1383047 00:25:02.182 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1383047 ']' 00:25:02.182 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:02.182 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.182 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:02.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:02.182 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.182 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:02.182 [2024-12-05 12:08:27.099494] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:02.182 [2024-12-05 12:08:27.099552] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:02.182 [2024-12-05 12:08:27.188759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.182 [2024-12-05 12:08:27.217721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:02.182 [2024-12-05 12:08:27.217752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:02.182 [2024-12-05 12:08:27.217758] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:02.182 [2024-12-05 12:08:27.217763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:02.182 [2024-12-05 12:08:27.217767] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:02.182 [2024-12-05 12:08:27.218224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Bhw1tF3Os6 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Bhw1tF3Os6 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Bhw1tF3Os6 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Bhw1tF3Os6 00:25:03.124 12:08:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:03.124 [2024-12-05 12:08:28.086617] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.124 12:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:03.386 12:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:03.648 [2024-12-05 12:08:28.451512] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:03.648 [2024-12-05 12:08:28.451710] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.648 12:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:03.648 malloc0 00:25:03.648 12:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:03.909 12:08:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Bhw1tF3Os6 00:25:04.170 [2024-12-05 12:08:28.982748] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Bhw1tF3Os6': 0100666 00:25:04.170 [2024-12-05 12:08:28.982770] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:25:04.170 request: 00:25:04.170 { 00:25:04.170 "name": "key0", 00:25:04.170 "path": "/tmp/tmp.Bhw1tF3Os6", 00:25:04.170 "method": "keyring_file_add_key", 00:25:04.170 "req_id": 1 
00:25:04.170 } 00:25:04.170 Got JSON-RPC error response 00:25:04.170 response: 00:25:04.170 { 00:25:04.170 "code": -1, 00:25:04.170 "message": "Operation not permitted" 00:25:04.170 } 00:25:04.170 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:04.170 [2024-12-05 12:08:29.159206] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:25:04.170 [2024-12-05 12:08:29.159234] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:25:04.170 request: 00:25:04.170 { 00:25:04.170 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.170 "host": "nqn.2016-06.io.spdk:host1", 00:25:04.170 "psk": "key0", 00:25:04.170 "method": "nvmf_subsystem_add_host", 00:25:04.170 "req_id": 1 00:25:04.170 } 00:25:04.170 Got JSON-RPC error response 00:25:04.170 response: 00:25:04.170 { 00:25:04.170 "code": -32603, 00:25:04.170 "message": "Internal error" 00:25:04.170 } 00:25:04.170 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:25:04.170 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:04.170 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:04.170 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:04.170 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1383047 00:25:04.170 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1383047 ']' 00:25:04.170 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1383047 00:25:04.170 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:04.170 12:08:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.170 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1383047 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1383047' 00:25:04.431 killing process with pid 1383047 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1383047 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1383047 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Bhw1tF3Os6 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1383422 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1383422 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1383422 ']' 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:04.431 12:08:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:04.431 [2024-12-05 12:08:29.439480] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:04.432 [2024-12-05 12:08:29.439533] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.692 [2024-12-05 12:08:29.528078] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.692 [2024-12-05 12:08:29.556435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.692 [2024-12-05 12:08:29.556472] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.692 [2024-12-05 12:08:29.556477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.692 [2024-12-05 12:08:29.556486] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:04.692 [2024-12-05 12:08:29.556490] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:04.692 [2024-12-05 12:08:29.556954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.265 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.265 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:05.265 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:05.265 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:05.265 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:05.265 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.265 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Bhw1tF3Os6 00:25:05.265 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Bhw1tF3Os6 00:25:05.265 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:05.525 [2024-12-05 12:08:30.429045] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:05.525 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:05.785 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:05.785 [2024-12-05 12:08:30.781902] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:05.785 [2024-12-05 12:08:30.782089] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:25:05.785 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:06.045 malloc0 00:25:06.045 12:08:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:06.306 12:08:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Bhw1tF3Os6 00:25:06.306 12:08:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:06.567 12:08:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:06.567 12:08:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1383803 00:25:06.567 12:08:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:06.567 12:08:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1383803 /var/tmp/bdevperf.sock 00:25:06.567 12:08:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1383803 ']' 00:25:06.567 12:08:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:06.567 12:08:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.567 12:08:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:25:06.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:06.567 12:08:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.567 12:08:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:06.567 [2024-12-05 12:08:31.562886] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:06.567 [2024-12-05 12:08:31.562940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383803 ] 00:25:06.828 [2024-12-05 12:08:31.650681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.828 [2024-12-05 12:08:31.685413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:07.399 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:07.399 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:07.399 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Bhw1tF3Os6 00:25:07.660 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:07.660 [2024-12-05 12:08:32.698020] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:07.920 TLSTESTn1 00:25:07.920 12:08:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:25:08.183 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:25:08.183 "subsystems": [ 00:25:08.183 { 00:25:08.183 "subsystem": "keyring", 00:25:08.183 "config": [ 00:25:08.183 { 00:25:08.183 "method": "keyring_file_add_key", 00:25:08.183 "params": { 00:25:08.183 "name": "key0", 00:25:08.183 "path": "/tmp/tmp.Bhw1tF3Os6" 00:25:08.183 } 00:25:08.183 } 00:25:08.183 ] 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "subsystem": "iobuf", 00:25:08.183 "config": [ 00:25:08.183 { 00:25:08.183 "method": "iobuf_set_options", 00:25:08.183 "params": { 00:25:08.183 "small_pool_count": 8192, 00:25:08.183 "large_pool_count": 1024, 00:25:08.183 "small_bufsize": 8192, 00:25:08.183 "large_bufsize": 135168, 00:25:08.183 "enable_numa": false 00:25:08.183 } 00:25:08.183 } 00:25:08.183 ] 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "subsystem": "sock", 00:25:08.183 "config": [ 00:25:08.183 { 00:25:08.183 "method": "sock_set_default_impl", 00:25:08.183 "params": { 00:25:08.183 "impl_name": "posix" 00:25:08.183 } 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "method": "sock_impl_set_options", 00:25:08.183 "params": { 00:25:08.183 "impl_name": "ssl", 00:25:08.183 "recv_buf_size": 4096, 00:25:08.183 "send_buf_size": 4096, 00:25:08.183 "enable_recv_pipe": true, 00:25:08.183 "enable_quickack": false, 00:25:08.183 "enable_placement_id": 0, 00:25:08.183 "enable_zerocopy_send_server": true, 00:25:08.183 "enable_zerocopy_send_client": false, 00:25:08.183 "zerocopy_threshold": 0, 00:25:08.183 "tls_version": 0, 00:25:08.183 "enable_ktls": false 00:25:08.183 } 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "method": "sock_impl_set_options", 00:25:08.183 "params": { 00:25:08.183 "impl_name": "posix", 00:25:08.183 "recv_buf_size": 2097152, 00:25:08.183 "send_buf_size": 2097152, 00:25:08.183 "enable_recv_pipe": true, 00:25:08.183 "enable_quickack": false, 00:25:08.183 "enable_placement_id": 0, 
00:25:08.183 "enable_zerocopy_send_server": true, 00:25:08.183 "enable_zerocopy_send_client": false, 00:25:08.183 "zerocopy_threshold": 0, 00:25:08.183 "tls_version": 0, 00:25:08.183 "enable_ktls": false 00:25:08.183 } 00:25:08.183 } 00:25:08.183 ] 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "subsystem": "vmd", 00:25:08.183 "config": [] 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "subsystem": "accel", 00:25:08.183 "config": [ 00:25:08.183 { 00:25:08.183 "method": "accel_set_options", 00:25:08.183 "params": { 00:25:08.183 "small_cache_size": 128, 00:25:08.183 "large_cache_size": 16, 00:25:08.183 "task_count": 2048, 00:25:08.183 "sequence_count": 2048, 00:25:08.183 "buf_count": 2048 00:25:08.183 } 00:25:08.183 } 00:25:08.183 ] 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "subsystem": "bdev", 00:25:08.183 "config": [ 00:25:08.183 { 00:25:08.183 "method": "bdev_set_options", 00:25:08.183 "params": { 00:25:08.183 "bdev_io_pool_size": 65535, 00:25:08.183 "bdev_io_cache_size": 256, 00:25:08.183 "bdev_auto_examine": true, 00:25:08.183 "iobuf_small_cache_size": 128, 00:25:08.183 "iobuf_large_cache_size": 16 00:25:08.183 } 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "method": "bdev_raid_set_options", 00:25:08.183 "params": { 00:25:08.183 "process_window_size_kb": 1024, 00:25:08.183 "process_max_bandwidth_mb_sec": 0 00:25:08.183 } 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "method": "bdev_iscsi_set_options", 00:25:08.183 "params": { 00:25:08.183 "timeout_sec": 30 00:25:08.183 } 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "method": "bdev_nvme_set_options", 00:25:08.183 "params": { 00:25:08.183 "action_on_timeout": "none", 00:25:08.183 "timeout_us": 0, 00:25:08.183 "timeout_admin_us": 0, 00:25:08.183 "keep_alive_timeout_ms": 10000, 00:25:08.183 "arbitration_burst": 0, 00:25:08.183 "low_priority_weight": 0, 00:25:08.183 "medium_priority_weight": 0, 00:25:08.183 "high_priority_weight": 0, 00:25:08.183 "nvme_adminq_poll_period_us": 10000, 00:25:08.183 "nvme_ioq_poll_period_us": 0, 
00:25:08.183 "io_queue_requests": 0, 00:25:08.183 "delay_cmd_submit": true, 00:25:08.183 "transport_retry_count": 4, 00:25:08.183 "bdev_retry_count": 3, 00:25:08.183 "transport_ack_timeout": 0, 00:25:08.183 "ctrlr_loss_timeout_sec": 0, 00:25:08.183 "reconnect_delay_sec": 0, 00:25:08.183 "fast_io_fail_timeout_sec": 0, 00:25:08.183 "disable_auto_failback": false, 00:25:08.183 "generate_uuids": false, 00:25:08.183 "transport_tos": 0, 00:25:08.183 "nvme_error_stat": false, 00:25:08.183 "rdma_srq_size": 0, 00:25:08.183 "io_path_stat": false, 00:25:08.183 "allow_accel_sequence": false, 00:25:08.183 "rdma_max_cq_size": 0, 00:25:08.183 "rdma_cm_event_timeout_ms": 0, 00:25:08.183 "dhchap_digests": [ 00:25:08.183 "sha256", 00:25:08.183 "sha384", 00:25:08.183 "sha512" 00:25:08.183 ], 00:25:08.183 "dhchap_dhgroups": [ 00:25:08.183 "null", 00:25:08.183 "ffdhe2048", 00:25:08.183 "ffdhe3072", 00:25:08.183 "ffdhe4096", 00:25:08.183 "ffdhe6144", 00:25:08.183 "ffdhe8192" 00:25:08.183 ] 00:25:08.183 } 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "method": "bdev_nvme_set_hotplug", 00:25:08.183 "params": { 00:25:08.183 "period_us": 100000, 00:25:08.183 "enable": false 00:25:08.183 } 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "method": "bdev_malloc_create", 00:25:08.183 "params": { 00:25:08.183 "name": "malloc0", 00:25:08.183 "num_blocks": 8192, 00:25:08.183 "block_size": 4096, 00:25:08.183 "physical_block_size": 4096, 00:25:08.183 "uuid": "aa215ef3-37d2-4cab-bfcf-6da66d56243d", 00:25:08.183 "optimal_io_boundary": 0, 00:25:08.183 "md_size": 0, 00:25:08.183 "dif_type": 0, 00:25:08.183 "dif_is_head_of_md": false, 00:25:08.183 "dif_pi_format": 0 00:25:08.183 } 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "method": "bdev_wait_for_examine" 00:25:08.183 } 00:25:08.183 ] 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "subsystem": "nbd", 00:25:08.183 "config": [] 00:25:08.183 }, 00:25:08.183 { 00:25:08.183 "subsystem": "scheduler", 00:25:08.183 "config": [ 00:25:08.183 { 00:25:08.183 "method": 
"framework_set_scheduler", 00:25:08.183 "params": { 00:25:08.184 "name": "static" 00:25:08.184 } 00:25:08.184 } 00:25:08.184 ] 00:25:08.184 }, 00:25:08.184 { 00:25:08.184 "subsystem": "nvmf", 00:25:08.184 "config": [ 00:25:08.184 { 00:25:08.184 "method": "nvmf_set_config", 00:25:08.184 "params": { 00:25:08.184 "discovery_filter": "match_any", 00:25:08.184 "admin_cmd_passthru": { 00:25:08.184 "identify_ctrlr": false 00:25:08.184 }, 00:25:08.184 "dhchap_digests": [ 00:25:08.184 "sha256", 00:25:08.184 "sha384", 00:25:08.184 "sha512" 00:25:08.184 ], 00:25:08.184 "dhchap_dhgroups": [ 00:25:08.184 "null", 00:25:08.184 "ffdhe2048", 00:25:08.184 "ffdhe3072", 00:25:08.184 "ffdhe4096", 00:25:08.184 "ffdhe6144", 00:25:08.184 "ffdhe8192" 00:25:08.184 ] 00:25:08.184 } 00:25:08.184 }, 00:25:08.184 { 00:25:08.184 "method": "nvmf_set_max_subsystems", 00:25:08.184 "params": { 00:25:08.184 "max_subsystems": 1024 00:25:08.184 } 00:25:08.184 }, 00:25:08.184 { 00:25:08.184 "method": "nvmf_set_crdt", 00:25:08.184 "params": { 00:25:08.184 "crdt1": 0, 00:25:08.184 "crdt2": 0, 00:25:08.184 "crdt3": 0 00:25:08.184 } 00:25:08.184 }, 00:25:08.184 { 00:25:08.184 "method": "nvmf_create_transport", 00:25:08.184 "params": { 00:25:08.184 "trtype": "TCP", 00:25:08.184 "max_queue_depth": 128, 00:25:08.184 "max_io_qpairs_per_ctrlr": 127, 00:25:08.184 "in_capsule_data_size": 4096, 00:25:08.184 "max_io_size": 131072, 00:25:08.184 "io_unit_size": 131072, 00:25:08.184 "max_aq_depth": 128, 00:25:08.184 "num_shared_buffers": 511, 00:25:08.184 "buf_cache_size": 4294967295, 00:25:08.184 "dif_insert_or_strip": false, 00:25:08.184 "zcopy": false, 00:25:08.184 "c2h_success": false, 00:25:08.184 "sock_priority": 0, 00:25:08.184 "abort_timeout_sec": 1, 00:25:08.184 "ack_timeout": 0, 00:25:08.184 "data_wr_pool_size": 0 00:25:08.184 } 00:25:08.184 }, 00:25:08.184 { 00:25:08.184 "method": "nvmf_create_subsystem", 00:25:08.184 "params": { 00:25:08.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.184 
"allow_any_host": false, 00:25:08.184 "serial_number": "SPDK00000000000001", 00:25:08.184 "model_number": "SPDK bdev Controller", 00:25:08.184 "max_namespaces": 10, 00:25:08.184 "min_cntlid": 1, 00:25:08.184 "max_cntlid": 65519, 00:25:08.184 "ana_reporting": false 00:25:08.184 } 00:25:08.184 }, 00:25:08.184 { 00:25:08.184 "method": "nvmf_subsystem_add_host", 00:25:08.184 "params": { 00:25:08.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.184 "host": "nqn.2016-06.io.spdk:host1", 00:25:08.184 "psk": "key0" 00:25:08.184 } 00:25:08.184 }, 00:25:08.184 { 00:25:08.184 "method": "nvmf_subsystem_add_ns", 00:25:08.184 "params": { 00:25:08.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.184 "namespace": { 00:25:08.184 "nsid": 1, 00:25:08.184 "bdev_name": "malloc0", 00:25:08.184 "nguid": "AA215EF337D24CABBFCF6DA66D56243D", 00:25:08.184 "uuid": "aa215ef3-37d2-4cab-bfcf-6da66d56243d", 00:25:08.184 "no_auto_visible": false 00:25:08.184 } 00:25:08.184 } 00:25:08.184 }, 00:25:08.184 { 00:25:08.184 "method": "nvmf_subsystem_add_listener", 00:25:08.184 "params": { 00:25:08.184 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.184 "listen_address": { 00:25:08.184 "trtype": "TCP", 00:25:08.184 "adrfam": "IPv4", 00:25:08.184 "traddr": "10.0.0.2", 00:25:08.184 "trsvcid": "4420" 00:25:08.184 }, 00:25:08.184 "secure_channel": true 00:25:08.184 } 00:25:08.184 } 00:25:08.184 ] 00:25:08.184 } 00:25:08.184 ] 00:25:08.184 }' 00:25:08.184 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:08.446 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:25:08.446 "subsystems": [ 00:25:08.446 { 00:25:08.446 "subsystem": "keyring", 00:25:08.446 "config": [ 00:25:08.446 { 00:25:08.446 "method": "keyring_file_add_key", 00:25:08.446 "params": { 00:25:08.446 "name": "key0", 00:25:08.446 "path": "/tmp/tmp.Bhw1tF3Os6" 00:25:08.446 } 
00:25:08.446 } 00:25:08.446 ] 00:25:08.446 }, 00:25:08.446 { 00:25:08.446 "subsystem": "iobuf", 00:25:08.446 "config": [ 00:25:08.446 { 00:25:08.446 "method": "iobuf_set_options", 00:25:08.446 "params": { 00:25:08.446 "small_pool_count": 8192, 00:25:08.446 "large_pool_count": 1024, 00:25:08.446 "small_bufsize": 8192, 00:25:08.446 "large_bufsize": 135168, 00:25:08.446 "enable_numa": false 00:25:08.446 } 00:25:08.446 } 00:25:08.446 ] 00:25:08.446 }, 00:25:08.446 { 00:25:08.446 "subsystem": "sock", 00:25:08.446 "config": [ 00:25:08.446 { 00:25:08.446 "method": "sock_set_default_impl", 00:25:08.446 "params": { 00:25:08.446 "impl_name": "posix" 00:25:08.446 } 00:25:08.446 }, 00:25:08.446 { 00:25:08.446 "method": "sock_impl_set_options", 00:25:08.446 "params": { 00:25:08.446 "impl_name": "ssl", 00:25:08.446 "recv_buf_size": 4096, 00:25:08.446 "send_buf_size": 4096, 00:25:08.446 "enable_recv_pipe": true, 00:25:08.446 "enable_quickack": false, 00:25:08.446 "enable_placement_id": 0, 00:25:08.446 "enable_zerocopy_send_server": true, 00:25:08.446 "enable_zerocopy_send_client": false, 00:25:08.447 "zerocopy_threshold": 0, 00:25:08.447 "tls_version": 0, 00:25:08.447 "enable_ktls": false 00:25:08.447 } 00:25:08.447 }, 00:25:08.447 { 00:25:08.447 "method": "sock_impl_set_options", 00:25:08.447 "params": { 00:25:08.447 "impl_name": "posix", 00:25:08.447 "recv_buf_size": 2097152, 00:25:08.447 "send_buf_size": 2097152, 00:25:08.447 "enable_recv_pipe": true, 00:25:08.447 "enable_quickack": false, 00:25:08.447 "enable_placement_id": 0, 00:25:08.447 "enable_zerocopy_send_server": true, 00:25:08.447 "enable_zerocopy_send_client": false, 00:25:08.447 "zerocopy_threshold": 0, 00:25:08.447 "tls_version": 0, 00:25:08.447 "enable_ktls": false 00:25:08.447 } 00:25:08.447 } 00:25:08.447 ] 00:25:08.447 }, 00:25:08.447 { 00:25:08.447 "subsystem": "vmd", 00:25:08.447 "config": [] 00:25:08.447 }, 00:25:08.447 { 00:25:08.447 "subsystem": "accel", 00:25:08.447 "config": [ 00:25:08.447 { 00:25:08.447 
"method": "accel_set_options", 00:25:08.447 "params": { 00:25:08.447 "small_cache_size": 128, 00:25:08.447 "large_cache_size": 16, 00:25:08.447 "task_count": 2048, 00:25:08.447 "sequence_count": 2048, 00:25:08.447 "buf_count": 2048 00:25:08.447 } 00:25:08.447 } 00:25:08.447 ] 00:25:08.447 }, 00:25:08.447 { 00:25:08.447 "subsystem": "bdev", 00:25:08.447 "config": [ 00:25:08.447 { 00:25:08.447 "method": "bdev_set_options", 00:25:08.447 "params": { 00:25:08.447 "bdev_io_pool_size": 65535, 00:25:08.447 "bdev_io_cache_size": 256, 00:25:08.447 "bdev_auto_examine": true, 00:25:08.447 "iobuf_small_cache_size": 128, 00:25:08.447 "iobuf_large_cache_size": 16 00:25:08.447 } 00:25:08.447 }, 00:25:08.447 { 00:25:08.447 "method": "bdev_raid_set_options", 00:25:08.447 "params": { 00:25:08.447 "process_window_size_kb": 1024, 00:25:08.447 "process_max_bandwidth_mb_sec": 0 00:25:08.447 } 00:25:08.447 }, 00:25:08.447 { 00:25:08.447 "method": "bdev_iscsi_set_options", 00:25:08.447 "params": { 00:25:08.447 "timeout_sec": 30 00:25:08.447 } 00:25:08.447 }, 00:25:08.447 { 00:25:08.447 "method": "bdev_nvme_set_options", 00:25:08.447 "params": { 00:25:08.447 "action_on_timeout": "none", 00:25:08.447 "timeout_us": 0, 00:25:08.447 "timeout_admin_us": 0, 00:25:08.447 "keep_alive_timeout_ms": 10000, 00:25:08.447 "arbitration_burst": 0, 00:25:08.447 "low_priority_weight": 0, 00:25:08.447 "medium_priority_weight": 0, 00:25:08.447 "high_priority_weight": 0, 00:25:08.447 "nvme_adminq_poll_period_us": 10000, 00:25:08.447 "nvme_ioq_poll_period_us": 0, 00:25:08.447 "io_queue_requests": 512, 00:25:08.447 "delay_cmd_submit": true, 00:25:08.447 "transport_retry_count": 4, 00:25:08.447 "bdev_retry_count": 3, 00:25:08.447 "transport_ack_timeout": 0, 00:25:08.447 "ctrlr_loss_timeout_sec": 0, 00:25:08.447 "reconnect_delay_sec": 0, 00:25:08.447 "fast_io_fail_timeout_sec": 0, 00:25:08.447 "disable_auto_failback": false, 00:25:08.447 "generate_uuids": false, 00:25:08.447 "transport_tos": 0, 00:25:08.447 
"nvme_error_stat": false, 00:25:08.447 "rdma_srq_size": 0, 00:25:08.447 "io_path_stat": false, 00:25:08.447 "allow_accel_sequence": false, 00:25:08.447 "rdma_max_cq_size": 0, 00:25:08.447 "rdma_cm_event_timeout_ms": 0, 00:25:08.447 "dhchap_digests": [ 00:25:08.447 "sha256", 00:25:08.447 "sha384", 00:25:08.447 "sha512" 00:25:08.447 ], 00:25:08.447 "dhchap_dhgroups": [ 00:25:08.447 "null", 00:25:08.447 "ffdhe2048", 00:25:08.447 "ffdhe3072", 00:25:08.447 "ffdhe4096", 00:25:08.447 "ffdhe6144", 00:25:08.447 "ffdhe8192" 00:25:08.447 ] 00:25:08.447 } 00:25:08.447 }, 00:25:08.447 { 00:25:08.447 "method": "bdev_nvme_attach_controller", 00:25:08.447 "params": { 00:25:08.447 "name": "TLSTEST", 00:25:08.447 "trtype": "TCP", 00:25:08.447 "adrfam": "IPv4", 00:25:08.447 "traddr": "10.0.0.2", 00:25:08.447 "trsvcid": "4420", 00:25:08.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.447 "prchk_reftag": false, 00:25:08.447 "prchk_guard": false, 00:25:08.447 "ctrlr_loss_timeout_sec": 0, 00:25:08.447 "reconnect_delay_sec": 0, 00:25:08.447 "fast_io_fail_timeout_sec": 0, 00:25:08.447 "psk": "key0", 00:25:08.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:08.447 "hdgst": false, 00:25:08.447 "ddgst": false, 00:25:08.447 "multipath": "multipath" 00:25:08.447 } 00:25:08.447 }, 00:25:08.447 { 00:25:08.447 "method": "bdev_nvme_set_hotplug", 00:25:08.447 "params": { 00:25:08.447 "period_us": 100000, 00:25:08.447 "enable": false 00:25:08.447 } 00:25:08.447 }, 00:25:08.447 { 00:25:08.447 "method": "bdev_wait_for_examine" 00:25:08.447 } 00:25:08.447 ] 00:25:08.447 }, 00:25:08.447 { 00:25:08.447 "subsystem": "nbd", 00:25:08.447 "config": [] 00:25:08.447 } 00:25:08.447 ] 00:25:08.447 }' 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1383803 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1383803 ']' 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 1383803 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1383803 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1383803' 00:25:08.447 killing process with pid 1383803 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1383803 00:25:08.447 Received shutdown signal, test time was about 10.000000 seconds 00:25:08.447 00:25:08.447 Latency(us) 00:25:08.447 [2024-12-05T11:08:33.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.447 [2024-12-05T11:08:33.496Z] =================================================================================================================== 00:25:08.447 [2024-12-05T11:08:33.496Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1383803 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1383422 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1383422 ']' 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1383422 00:25:08.447 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:08.709 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.709 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1383422 00:25:08.709 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:08.709 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:08.709 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1383422' 00:25:08.709 killing process with pid 1383422 00:25:08.709 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1383422 00:25:08.709 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1383422 00:25:08.709 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:08.709 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:08.709 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:08.709 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.709 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:25:08.709 "subsystems": [ 00:25:08.709 { 00:25:08.709 "subsystem": "keyring", 00:25:08.709 "config": [ 00:25:08.709 { 00:25:08.709 "method": "keyring_file_add_key", 00:25:08.709 "params": { 00:25:08.710 "name": "key0", 00:25:08.710 "path": "/tmp/tmp.Bhw1tF3Os6" 00:25:08.710 } 00:25:08.710 } 00:25:08.710 ] 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "subsystem": "iobuf", 00:25:08.710 "config": [ 00:25:08.710 { 00:25:08.710 "method": "iobuf_set_options", 00:25:08.710 "params": { 00:25:08.710 "small_pool_count": 8192, 00:25:08.710 "large_pool_count": 1024, 00:25:08.710 "small_bufsize": 8192, 00:25:08.710 "large_bufsize": 135168, 
00:25:08.710 "enable_numa": false 00:25:08.710 } 00:25:08.710 } 00:25:08.710 ] 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "subsystem": "sock", 00:25:08.710 "config": [ 00:25:08.710 { 00:25:08.710 "method": "sock_set_default_impl", 00:25:08.710 "params": { 00:25:08.710 "impl_name": "posix" 00:25:08.710 } 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "method": "sock_impl_set_options", 00:25:08.710 "params": { 00:25:08.710 "impl_name": "ssl", 00:25:08.710 "recv_buf_size": 4096, 00:25:08.710 "send_buf_size": 4096, 00:25:08.710 "enable_recv_pipe": true, 00:25:08.710 "enable_quickack": false, 00:25:08.710 "enable_placement_id": 0, 00:25:08.710 "enable_zerocopy_send_server": true, 00:25:08.710 "enable_zerocopy_send_client": false, 00:25:08.710 "zerocopy_threshold": 0, 00:25:08.710 "tls_version": 0, 00:25:08.710 "enable_ktls": false 00:25:08.710 } 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "method": "sock_impl_set_options", 00:25:08.710 "params": { 00:25:08.710 "impl_name": "posix", 00:25:08.710 "recv_buf_size": 2097152, 00:25:08.710 "send_buf_size": 2097152, 00:25:08.710 "enable_recv_pipe": true, 00:25:08.710 "enable_quickack": false, 00:25:08.710 "enable_placement_id": 0, 00:25:08.710 "enable_zerocopy_send_server": true, 00:25:08.710 "enable_zerocopy_send_client": false, 00:25:08.710 "zerocopy_threshold": 0, 00:25:08.710 "tls_version": 0, 00:25:08.710 "enable_ktls": false 00:25:08.710 } 00:25:08.710 } 00:25:08.710 ] 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "subsystem": "vmd", 00:25:08.710 "config": [] 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "subsystem": "accel", 00:25:08.710 "config": [ 00:25:08.710 { 00:25:08.710 "method": "accel_set_options", 00:25:08.710 "params": { 00:25:08.710 "small_cache_size": 128, 00:25:08.710 "large_cache_size": 16, 00:25:08.710 "task_count": 2048, 00:25:08.710 "sequence_count": 2048, 00:25:08.710 "buf_count": 2048 00:25:08.710 } 00:25:08.710 } 00:25:08.710 ] 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "subsystem": "bdev", 00:25:08.710 
"config": [ 00:25:08.710 { 00:25:08.710 "method": "bdev_set_options", 00:25:08.710 "params": { 00:25:08.710 "bdev_io_pool_size": 65535, 00:25:08.710 "bdev_io_cache_size": 256, 00:25:08.710 "bdev_auto_examine": true, 00:25:08.710 "iobuf_small_cache_size": 128, 00:25:08.710 "iobuf_large_cache_size": 16 00:25:08.710 } 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "method": "bdev_raid_set_options", 00:25:08.710 "params": { 00:25:08.710 "process_window_size_kb": 1024, 00:25:08.710 "process_max_bandwidth_mb_sec": 0 00:25:08.710 } 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "method": "bdev_iscsi_set_options", 00:25:08.710 "params": { 00:25:08.710 "timeout_sec": 30 00:25:08.710 } 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "method": "bdev_nvme_set_options", 00:25:08.710 "params": { 00:25:08.710 "action_on_timeout": "none", 00:25:08.710 "timeout_us": 0, 00:25:08.710 "timeout_admin_us": 0, 00:25:08.710 "keep_alive_timeout_ms": 10000, 00:25:08.710 "arbitration_burst": 0, 00:25:08.710 "low_priority_weight": 0, 00:25:08.710 "medium_priority_weight": 0, 00:25:08.710 "high_priority_weight": 0, 00:25:08.710 "nvme_adminq_poll_period_us": 10000, 00:25:08.710 "nvme_ioq_poll_period_us": 0, 00:25:08.710 "io_queue_requests": 0, 00:25:08.710 "delay_cmd_submit": true, 00:25:08.710 "transport_retry_count": 4, 00:25:08.710 "bdev_retry_count": 3, 00:25:08.710 "transport_ack_timeout": 0, 00:25:08.710 "ctrlr_loss_timeout_sec": 0, 00:25:08.710 "reconnect_delay_sec": 0, 00:25:08.710 "fast_io_fail_timeout_sec": 0, 00:25:08.710 "disable_auto_failback": false, 00:25:08.710 "generate_uuids": false, 00:25:08.710 "transport_tos": 0, 00:25:08.710 "nvme_error_stat": false, 00:25:08.710 "rdma_srq_size": 0, 00:25:08.710 "io_path_stat": false, 00:25:08.710 "allow_accel_sequence": false, 00:25:08.710 "rdma_max_cq_size": 0, 00:25:08.710 "rdma_cm_event_timeout_ms": 0, 00:25:08.710 "dhchap_digests": [ 00:25:08.710 "sha256", 00:25:08.710 "sha384", 00:25:08.710 "sha512" 00:25:08.710 ], 00:25:08.710 
"dhchap_dhgroups": [ 00:25:08.710 "null", 00:25:08.710 "ffdhe2048", 00:25:08.710 "ffdhe3072", 00:25:08.710 "ffdhe4096", 00:25:08.710 "ffdhe6144", 00:25:08.710 "ffdhe8192" 00:25:08.710 ] 00:25:08.710 } 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "method": "bdev_nvme_set_hotplug", 00:25:08.710 "params": { 00:25:08.710 "period_us": 100000, 00:25:08.710 "enable": false 00:25:08.710 } 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "method": "bdev_malloc_create", 00:25:08.710 "params": { 00:25:08.710 "name": "malloc0", 00:25:08.710 "num_blocks": 8192, 00:25:08.710 "block_size": 4096, 00:25:08.710 "physical_block_size": 4096, 00:25:08.710 "uuid": "aa215ef3-37d2-4cab-bfcf-6da66d56243d", 00:25:08.710 "optimal_io_boundary": 0, 00:25:08.710 "md_size": 0, 00:25:08.710 "dif_type": 0, 00:25:08.710 "dif_is_head_of_md": false, 00:25:08.710 "dif_pi_format": 0 00:25:08.710 } 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "method": "bdev_wait_for_examine" 00:25:08.710 } 00:25:08.710 ] 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "subsystem": "nbd", 00:25:08.710 "config": [] 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "subsystem": "scheduler", 00:25:08.710 "config": [ 00:25:08.710 { 00:25:08.710 "method": "framework_set_scheduler", 00:25:08.710 "params": { 00:25:08.710 "name": "static" 00:25:08.710 } 00:25:08.710 } 00:25:08.710 ] 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "subsystem": "nvmf", 00:25:08.710 "config": [ 00:25:08.710 { 00:25:08.710 "method": "nvmf_set_config", 00:25:08.710 "params": { 00:25:08.710 "discovery_filter": "match_any", 00:25:08.710 "admin_cmd_passthru": { 00:25:08.710 "identify_ctrlr": false 00:25:08.710 }, 00:25:08.710 "dhchap_digests": [ 00:25:08.710 "sha256", 00:25:08.710 "sha384", 00:25:08.710 "sha512" 00:25:08.710 ], 00:25:08.710 "dhchap_dhgroups": [ 00:25:08.710 "null", 00:25:08.710 "ffdhe2048", 00:25:08.710 "ffdhe3072", 00:25:08.710 "ffdhe4096", 00:25:08.710 "ffdhe6144", 00:25:08.710 "ffdhe8192" 00:25:08.710 ] 00:25:08.710 } 00:25:08.710 }, 00:25:08.710 { 
00:25:08.710 "method": "nvmf_set_max_subsystems", 00:25:08.710 "params": { 00:25:08.710 "max_subsystems": 1024 00:25:08.710 } 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "method": "nvmf_set_crdt", 00:25:08.710 "params": { 00:25:08.710 "crdt1": 0, 00:25:08.710 "crdt2": 0, 00:25:08.710 "crdt3": 0 00:25:08.710 } 00:25:08.710 }, 00:25:08.710 { 00:25:08.710 "method": "nvmf_create_transport", 00:25:08.710 "params": { 00:25:08.710 "trtype": "TCP", 00:25:08.710 "max_queue_depth": 128, 00:25:08.710 "max_io_qpairs_per_ctrlr": 127, 00:25:08.710 "in_capsule_data_size": 4096, 00:25:08.710 "max_io_size": 131072, 00:25:08.711 "io_unit_size": 131072, 00:25:08.711 "max_aq_depth": 128, 00:25:08.711 "num_shared_buffers": 511, 00:25:08.711 "buf_cache_size": 4294967295, 00:25:08.711 "dif_insert_or_strip": false, 00:25:08.711 "zcopy": false, 00:25:08.711 "c2h_success": false, 00:25:08.711 "sock_priority": 0, 00:25:08.711 "abort_timeout_sec": 1, 00:25:08.711 "ack_timeout": 0, 00:25:08.711 "data_wr_pool_size": 0 00:25:08.711 } 00:25:08.711 }, 00:25:08.711 { 00:25:08.711 "method": "nvmf_create_subsystem", 00:25:08.711 "params": { 00:25:08.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.711 "allow_any_host": false, 00:25:08.711 "serial_number": "SPDK00000000000001", 00:25:08.711 "model_number": "SPDK bdev Controller", 00:25:08.711 "max_namespaces": 10, 00:25:08.711 "min_cntlid": 1, 00:25:08.711 "max_cntlid": 65519, 00:25:08.711 "ana_reporting": false 00:25:08.711 } 00:25:08.711 }, 00:25:08.711 { 00:25:08.711 "method": "nvmf_subsystem_add_host", 00:25:08.711 "params": { 00:25:08.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.711 "host": "nqn.2016-06.io.spdk:host1", 00:25:08.711 "psk": "key0" 00:25:08.711 } 00:25:08.711 }, 00:25:08.711 { 00:25:08.711 "method": "nvmf_subsystem_add_ns", 00:25:08.711 "params": { 00:25:08.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.711 "namespace": { 00:25:08.711 "nsid": 1, 00:25:08.711 "bdev_name": "malloc0", 00:25:08.711 "nguid": 
"AA215EF337D24CABBFCF6DA66D56243D", 00:25:08.711 "uuid": "aa215ef3-37d2-4cab-bfcf-6da66d56243d", 00:25:08.711 "no_auto_visible": false 00:25:08.711 } 00:25:08.711 } 00:25:08.711 }, 00:25:08.711 { 00:25:08.711 "method": "nvmf_subsystem_add_listener", 00:25:08.711 "params": { 00:25:08.711 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:08.711 "listen_address": { 00:25:08.711 "trtype": "TCP", 00:25:08.711 "adrfam": "IPv4", 00:25:08.711 "traddr": "10.0.0.2", 00:25:08.711 "trsvcid": "4420" 00:25:08.711 }, 00:25:08.711 "secure_channel": true 00:25:08.711 } 00:25:08.711 } 00:25:08.711 ] 00:25:08.711 } 00:25:08.711 ] 00:25:08.711 }' 00:25:08.711 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1384340 00:25:08.711 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1384340 00:25:08.711 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:08.711 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1384340 ']' 00:25:08.711 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.711 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:08.711 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:08.711 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:08.711 12:08:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:08.711 [2024-12-05 12:08:33.728727] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:08.711 [2024-12-05 12:08:33.728787] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.971 [2024-12-05 12:08:33.818657] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:08.971 [2024-12-05 12:08:33.848138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.971 [2024-12-05 12:08:33.848165] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:08.971 [2024-12-05 12:08:33.848171] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.971 [2024-12-05 12:08:33.848176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.971 [2024-12-05 12:08:33.848180] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:08.971 [2024-12-05 12:08:33.848652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.230 [2024-12-05 12:08:34.042092] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.230 [2024-12-05 12:08:34.074110] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:09.230 [2024-12-05 12:08:34.074309] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.491 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:09.491 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:09.491 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:09.491 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:09.491 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.752 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.752 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1384497 00:25:09.752 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1384497 /var/tmp/bdevperf.sock 00:25:09.752 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1384497 ']' 00:25:09.752 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:09.752 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.752 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:09.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:09.752 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:09.752 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.752 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:09.752 12:08:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:25:09.752 "subsystems": [ 00:25:09.752 { 00:25:09.752 "subsystem": "keyring", 00:25:09.752 "config": [ 00:25:09.752 { 00:25:09.752 "method": "keyring_file_add_key", 00:25:09.752 "params": { 00:25:09.752 "name": "key0", 00:25:09.752 "path": "/tmp/tmp.Bhw1tF3Os6" 00:25:09.752 } 00:25:09.752 } 00:25:09.752 ] 00:25:09.752 }, 00:25:09.752 { 00:25:09.752 "subsystem": "iobuf", 00:25:09.752 "config": [ 00:25:09.752 { 00:25:09.752 "method": "iobuf_set_options", 00:25:09.752 "params": { 00:25:09.752 "small_pool_count": 8192, 00:25:09.752 "large_pool_count": 1024, 00:25:09.753 "small_bufsize": 8192, 00:25:09.753 "large_bufsize": 135168, 00:25:09.753 "enable_numa": false 00:25:09.753 } 00:25:09.753 } 00:25:09.753 ] 00:25:09.753 }, 00:25:09.753 { 00:25:09.753 "subsystem": "sock", 00:25:09.753 "config": [ 00:25:09.753 { 00:25:09.753 "method": "sock_set_default_impl", 00:25:09.753 "params": { 00:25:09.753 "impl_name": "posix" 00:25:09.753 } 00:25:09.753 }, 00:25:09.753 { 00:25:09.753 "method": "sock_impl_set_options", 00:25:09.753 "params": { 00:25:09.753 "impl_name": "ssl", 00:25:09.753 "recv_buf_size": 4096, 00:25:09.753 "send_buf_size": 4096, 00:25:09.753 "enable_recv_pipe": true, 00:25:09.753 "enable_quickack": false, 00:25:09.753 "enable_placement_id": 0, 00:25:09.753 "enable_zerocopy_send_server": true, 00:25:09.753 
"enable_zerocopy_send_client": false, 00:25:09.753 "zerocopy_threshold": 0, 00:25:09.753 "tls_version": 0, 00:25:09.753 "enable_ktls": false 00:25:09.753 } 00:25:09.753 }, 00:25:09.753 { 00:25:09.753 "method": "sock_impl_set_options", 00:25:09.753 "params": { 00:25:09.753 "impl_name": "posix", 00:25:09.753 "recv_buf_size": 2097152, 00:25:09.753 "send_buf_size": 2097152, 00:25:09.753 "enable_recv_pipe": true, 00:25:09.753 "enable_quickack": false, 00:25:09.753 "enable_placement_id": 0, 00:25:09.753 "enable_zerocopy_send_server": true, 00:25:09.753 "enable_zerocopy_send_client": false, 00:25:09.753 "zerocopy_threshold": 0, 00:25:09.753 "tls_version": 0, 00:25:09.753 "enable_ktls": false 00:25:09.753 } 00:25:09.753 } 00:25:09.753 ] 00:25:09.753 }, 00:25:09.753 { 00:25:09.753 "subsystem": "vmd", 00:25:09.753 "config": [] 00:25:09.753 }, 00:25:09.753 { 00:25:09.753 "subsystem": "accel", 00:25:09.753 "config": [ 00:25:09.753 { 00:25:09.753 "method": "accel_set_options", 00:25:09.753 "params": { 00:25:09.753 "small_cache_size": 128, 00:25:09.753 "large_cache_size": 16, 00:25:09.753 "task_count": 2048, 00:25:09.753 "sequence_count": 2048, 00:25:09.753 "buf_count": 2048 00:25:09.753 } 00:25:09.753 } 00:25:09.753 ] 00:25:09.753 }, 00:25:09.753 { 00:25:09.753 "subsystem": "bdev", 00:25:09.753 "config": [ 00:25:09.753 { 00:25:09.753 "method": "bdev_set_options", 00:25:09.753 "params": { 00:25:09.753 "bdev_io_pool_size": 65535, 00:25:09.753 "bdev_io_cache_size": 256, 00:25:09.753 "bdev_auto_examine": true, 00:25:09.753 "iobuf_small_cache_size": 128, 00:25:09.753 "iobuf_large_cache_size": 16 00:25:09.753 } 00:25:09.753 }, 00:25:09.753 { 00:25:09.753 "method": "bdev_raid_set_options", 00:25:09.753 "params": { 00:25:09.753 "process_window_size_kb": 1024, 00:25:09.753 "process_max_bandwidth_mb_sec": 0 00:25:09.753 } 00:25:09.753 }, 00:25:09.753 { 00:25:09.753 "method": "bdev_iscsi_set_options", 00:25:09.753 "params": { 00:25:09.753 "timeout_sec": 30 00:25:09.753 } 00:25:09.753 }, 
00:25:09.753 { 00:25:09.753 "method": "bdev_nvme_set_options", 00:25:09.753 "params": { 00:25:09.753 "action_on_timeout": "none", 00:25:09.753 "timeout_us": 0, 00:25:09.753 "timeout_admin_us": 0, 00:25:09.753 "keep_alive_timeout_ms": 10000, 00:25:09.753 "arbitration_burst": 0, 00:25:09.753 "low_priority_weight": 0, 00:25:09.753 "medium_priority_weight": 0, 00:25:09.753 "high_priority_weight": 0, 00:25:09.753 "nvme_adminq_poll_period_us": 10000, 00:25:09.753 "nvme_ioq_poll_period_us": 0, 00:25:09.753 "io_queue_requests": 512, 00:25:09.753 "delay_cmd_submit": true, 00:25:09.753 "transport_retry_count": 4, 00:25:09.753 "bdev_retry_count": 3, 00:25:09.753 "transport_ack_timeout": 0, 00:25:09.753 "ctrlr_loss_timeout_sec": 0, 00:25:09.753 "reconnect_delay_sec": 0, 00:25:09.753 "fast_io_fail_timeout_sec": 0, 00:25:09.753 "disable_auto_failback": false, 00:25:09.753 "generate_uuids": false, 00:25:09.753 "transport_tos": 0, 00:25:09.753 "nvme_error_stat": false, 00:25:09.753 "rdma_srq_size": 0, 00:25:09.753 "io_path_stat": false, 00:25:09.753 "allow_accel_sequence": false, 00:25:09.753 "rdma_max_cq_size": 0, 00:25:09.753 "rdma_cm_event_timeout_ms": 0, 00:25:09.753 "dhchap_digests": [ 00:25:09.753 "sha256", 00:25:09.753 "sha384", 00:25:09.753 "sha512" 00:25:09.753 ], 00:25:09.753 "dhchap_dhgroups": [ 00:25:09.753 "null", 00:25:09.753 "ffdhe2048", 00:25:09.753 "ffdhe3072", 00:25:09.753 "ffdhe4096", 00:25:09.753 "ffdhe6144", 00:25:09.753 "ffdhe8192" 00:25:09.753 ] 00:25:09.753 } 00:25:09.753 }, 00:25:09.753 { 00:25:09.753 "method": "bdev_nvme_attach_controller", 00:25:09.753 "params": { 00:25:09.753 "name": "TLSTEST", 00:25:09.753 "trtype": "TCP", 00:25:09.753 "adrfam": "IPv4", 00:25:09.753 "traddr": "10.0.0.2", 00:25:09.753 "trsvcid": "4420", 00:25:09.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:09.753 "prchk_reftag": false, 00:25:09.753 "prchk_guard": false, 00:25:09.753 "ctrlr_loss_timeout_sec": 0, 00:25:09.753 "reconnect_delay_sec": 0, 00:25:09.753 
"fast_io_fail_timeout_sec": 0, 00:25:09.753 "psk": "key0", 00:25:09.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:09.753 "hdgst": false, 00:25:09.753 "ddgst": false, 00:25:09.753 "multipath": "multipath" 00:25:09.753 } 00:25:09.753 }, 00:25:09.753 { 00:25:09.753 "method": "bdev_nvme_set_hotplug", 00:25:09.753 "params": { 00:25:09.753 "period_us": 100000, 00:25:09.753 "enable": false 00:25:09.753 } 00:25:09.753 }, 00:25:09.753 { 00:25:09.753 "method": "bdev_wait_for_examine" 00:25:09.753 } 00:25:09.753 ] 00:25:09.753 }, 00:25:09.753 { 00:25:09.753 "subsystem": "nbd", 00:25:09.753 "config": [] 00:25:09.753 } 00:25:09.753 ] 00:25:09.753 }' 00:25:09.753 [2024-12-05 12:08:34.607902] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:09.753 [2024-12-05 12:08:34.607952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1384497 ] 00:25:09.753 [2024-12-05 12:08:34.694159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.753 [2024-12-05 12:08:34.729078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.014 [2024-12-05 12:08:34.869275] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:10.586 12:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.586 12:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:10.586 12:08:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:10.586 Running I/O for 10 seconds... 
00:25:12.467 5746.00 IOPS, 22.45 MiB/s [2024-12-05T11:08:38.899Z] 5612.00 IOPS, 21.92 MiB/s [2024-12-05T11:08:39.840Z] 5243.33 IOPS, 20.48 MiB/s [2024-12-05T11:08:40.782Z] 5024.00 IOPS, 19.62 MiB/s [2024-12-05T11:08:41.721Z] 5034.20 IOPS, 19.66 MiB/s [2024-12-05T11:08:42.660Z] 5007.67 IOPS, 19.56 MiB/s [2024-12-05T11:08:43.638Z] 5012.29 IOPS, 19.58 MiB/s [2024-12-05T11:08:44.578Z] 5031.25 IOPS, 19.65 MiB/s [2024-12-05T11:08:45.516Z] 5027.89 IOPS, 19.64 MiB/s [2024-12-05T11:08:45.776Z] 5030.70 IOPS, 19.65 MiB/s 00:25:20.727 Latency(us) 00:25:20.727 [2024-12-05T11:08:45.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.727 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:20.727 Verification LBA range: start 0x0 length 0x2000 00:25:20.727 TLSTESTn1 : 10.02 5033.76 19.66 0.00 0.00 25386.36 5652.48 47622.83 00:25:20.727 [2024-12-05T11:08:45.776Z] =================================================================================================================== 00:25:20.727 [2024-12-05T11:08:45.776Z] Total : 5033.76 19.66 0.00 0.00 25386.36 5652.48 47622.83 00:25:20.727 { 00:25:20.727 "results": [ 00:25:20.727 { 00:25:20.727 "job": "TLSTESTn1", 00:25:20.727 "core_mask": "0x4", 00:25:20.727 "workload": "verify", 00:25:20.727 "status": "finished", 00:25:20.727 "verify_range": { 00:25:20.727 "start": 0, 00:25:20.727 "length": 8192 00:25:20.727 }, 00:25:20.727 "queue_depth": 128, 00:25:20.727 "io_size": 4096, 00:25:20.727 "runtime": 10.019143, 00:25:20.727 "iops": 5033.763865831638, 00:25:20.727 "mibps": 19.663140100904837, 00:25:20.727 "io_failed": 0, 00:25:20.727 "io_timeout": 0, 00:25:20.727 "avg_latency_us": 25386.36028287795, 00:25:20.727 "min_latency_us": 5652.48, 00:25:20.727 "max_latency_us": 47622.82666666667 00:25:20.727 } 00:25:20.727 ], 00:25:20.727 "core_count": 1 00:25:20.727 } 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' 
SIGINT SIGTERM EXIT 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1384497 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1384497 ']' 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1384497 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1384497 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1384497' 00:25:20.727 killing process with pid 1384497 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1384497 00:25:20.727 Received shutdown signal, test time was about 10.000000 seconds 00:25:20.727 00:25:20.727 Latency(us) 00:25:20.727 [2024-12-05T11:08:45.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.727 [2024-12-05T11:08:45.776Z] =================================================================================================================== 00:25:20.727 [2024-12-05T11:08:45.776Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1384497 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1384340 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 1384340 ']' 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1384340 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1384340 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1384340' 00:25:20.727 killing process with pid 1384340 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1384340 00:25:20.727 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1384340 00:25:20.988 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:25:20.988 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:20.988 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.988 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:20.988 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1386774 00:25:20.988 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1386774 00:25:20.988 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:20.988 12:08:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1386774 ']' 00:25:20.988 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.988 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.988 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.988 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.988 12:08:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:20.988 [2024-12-05 12:08:45.940043] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:20.988 [2024-12-05 12:08:45.940097] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.988 [2024-12-05 12:08:46.031622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.248 [2024-12-05 12:08:46.066337] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.248 [2024-12-05 12:08:46.066372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.248 [2024-12-05 12:08:46.066381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.248 [2024-12-05 12:08:46.066387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:21.248 [2024-12-05 12:08:46.066394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.248 [2024-12-05 12:08:46.066970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.818 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.818 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:21.818 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:21.818 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.818 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:21.818 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.818 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Bhw1tF3Os6 00:25:21.818 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Bhw1tF3Os6 00:25:21.818 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:22.079 [2024-12-05 12:08:46.942666] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.079 12:08:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:22.340 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:22.340 [2024-12-05 12:08:47.343689] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:25:22.340 [2024-12-05 12:08:47.344022] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.340 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:22.601 malloc0 00:25:22.601 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:22.862 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Bhw1tF3Os6 00:25:23.123 12:08:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:25:23.383 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1387200 00:25:23.383 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:23.383 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:23.383 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1387200 /var/tmp/bdevperf.sock 00:25:23.383 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1387200 ']' 00:25:23.383 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:23.383 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:23.383 
12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:23.383 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:23.383 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:23.383 12:08:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:23.383 [2024-12-05 12:08:48.233184] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:23.383 [2024-12-05 12:08:48.233257] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387200 ] 00:25:23.383 [2024-12-05 12:08:48.324010] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.383 [2024-12-05 12:08:48.357553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.327 12:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:24.327 12:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:24.327 12:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Bhw1tF3Os6 00:25:24.327 12:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:24.588 [2024-12-05 12:08:49.384518] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:25:24.588 nvme0n1 00:25:24.588 12:08:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:24.588 Running I/O for 1 seconds... 00:25:25.792 5448.00 IOPS, 21.28 MiB/s 00:25:25.792 Latency(us) 00:25:25.792 [2024-12-05T11:08:50.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.792 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:25.792 Verification LBA range: start 0x0 length 0x2000 00:25:25.792 nvme0n1 : 1.02 5468.51 21.36 0.00 0.00 23191.67 8137.39 74711.04 00:25:25.792 [2024-12-05T11:08:50.841Z] =================================================================================================================== 00:25:25.792 [2024-12-05T11:08:50.841Z] Total : 5468.51 21.36 0.00 0.00 23191.67 8137.39 74711.04 00:25:25.792 { 00:25:25.792 "results": [ 00:25:25.792 { 00:25:25.792 "job": "nvme0n1", 00:25:25.792 "core_mask": "0x2", 00:25:25.792 "workload": "verify", 00:25:25.792 "status": "finished", 00:25:25.792 "verify_range": { 00:25:25.792 "start": 0, 00:25:25.792 "length": 8192 00:25:25.792 }, 00:25:25.792 "queue_depth": 128, 00:25:25.792 "io_size": 4096, 00:25:25.792 "runtime": 1.019656, 00:25:25.792 "iops": 5468.510948790573, 00:25:25.792 "mibps": 21.361370893713175, 00:25:25.792 "io_failed": 0, 00:25:25.792 "io_timeout": 0, 00:25:25.792 "avg_latency_us": 23191.665614538495, 00:25:25.792 "min_latency_us": 8137.386666666666, 00:25:25.792 "max_latency_us": 74711.04 00:25:25.792 } 00:25:25.792 ], 00:25:25.792 "core_count": 1 00:25:25.792 } 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1387200 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1387200 ']' 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1387200 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1387200 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1387200' 00:25:25.792 killing process with pid 1387200 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1387200 00:25:25.792 Received shutdown signal, test time was about 1.000000 seconds 00:25:25.792 00:25:25.792 Latency(us) 00:25:25.792 [2024-12-05T11:08:50.841Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:25.792 [2024-12-05T11:08:50.841Z] =================================================================================================================== 00:25:25.792 [2024-12-05T11:08:50.841Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1387200 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1386774 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1386774 ']' 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1386774 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:25.792 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1386774 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1386774' 00:25:26.054 killing process with pid 1386774 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1386774 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1386774 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1387733 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1387733 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1387733 ']' 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:26.054 12:08:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.054 [2024-12-05 12:08:51.015079] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:26.054 [2024-12-05 12:08:51.015136] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.315 [2024-12-05 12:08:51.107116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.315 [2024-12-05 12:08:51.139574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.315 [2024-12-05 12:08:51.139609] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.315 [2024-12-05 12:08:51.139615] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:26.315 [2024-12-05 12:08:51.139619] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:26.315 [2024-12-05 12:08:51.139623] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:26.315 [2024-12-05 12:08:51.140138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:26.888 [2024-12-05 12:08:51.866564] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:26.888 malloc0 00:25:26.888 [2024-12-05 12:08:51.892536] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:26.888 [2024-12-05 12:08:51.892735] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1387913 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1387913 /var/tmp/bdevperf.sock 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1387913 ']' 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:26.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:26.888 12:08:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:27.149 [2024-12-05 12:08:51.970727] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:25:27.149 [2024-12-05 12:08:51.970776] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1387913 ] 00:25:27.149 [2024-12-05 12:08:52.055221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.149 [2024-12-05 12:08:52.084599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.721 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.721 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:27.721 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Bhw1tF3Os6 00:25:27.981 12:08:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:28.242 [2024-12-05 12:08:53.081102] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:28.242 nvme0n1 00:25:28.242 12:08:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:28.242 Running I/O for 1 seconds... 
00:25:29.628 5361.00 IOPS, 20.94 MiB/s 00:25:29.628 Latency(us) 00:25:29.628 [2024-12-05T11:08:54.677Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.628 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:29.628 Verification LBA range: start 0x0 length 0x2000 00:25:29.628 nvme0n1 : 1.01 5414.33 21.15 0.00 0.00 23494.35 5297.49 28835.84 00:25:29.628 [2024-12-05T11:08:54.677Z] =================================================================================================================== 00:25:29.628 [2024-12-05T11:08:54.677Z] Total : 5414.33 21.15 0.00 0.00 23494.35 5297.49 28835.84 00:25:29.628 { 00:25:29.628 "results": [ 00:25:29.628 { 00:25:29.628 "job": "nvme0n1", 00:25:29.628 "core_mask": "0x2", 00:25:29.628 "workload": "verify", 00:25:29.628 "status": "finished", 00:25:29.628 "verify_range": { 00:25:29.628 "start": 0, 00:25:29.628 "length": 8192 00:25:29.628 }, 00:25:29.628 "queue_depth": 128, 00:25:29.628 "io_size": 4096, 00:25:29.628 "runtime": 1.013792, 00:25:29.628 "iops": 5414.32562103469, 00:25:29.628 "mibps": 21.149709457166757, 00:25:29.628 "io_failed": 0, 00:25:29.628 "io_timeout": 0, 00:25:29.628 "avg_latency_us": 23494.35203983725, 00:25:29.628 "min_latency_us": 5297.493333333333, 00:25:29.628 "max_latency_us": 28835.84 00:25:29.628 } 00:25:29.628 ], 00:25:29.628 "core_count": 1 00:25:29.628 } 00:25:29.628 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:25:29.628 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.628 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:29.628 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.628 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:25:29.628 "subsystems": [ 00:25:29.628 { 00:25:29.628 "subsystem": "keyring", 
00:25:29.628 "config": [ 00:25:29.628 { 00:25:29.628 "method": "keyring_file_add_key", 00:25:29.628 "params": { 00:25:29.628 "name": "key0", 00:25:29.628 "path": "/tmp/tmp.Bhw1tF3Os6" 00:25:29.628 } 00:25:29.628 } 00:25:29.628 ] 00:25:29.628 }, 00:25:29.628 { 00:25:29.628 "subsystem": "iobuf", 00:25:29.629 "config": [ 00:25:29.629 { 00:25:29.629 "method": "iobuf_set_options", 00:25:29.629 "params": { 00:25:29.629 "small_pool_count": 8192, 00:25:29.629 "large_pool_count": 1024, 00:25:29.629 "small_bufsize": 8192, 00:25:29.629 "large_bufsize": 135168, 00:25:29.629 "enable_numa": false 00:25:29.629 } 00:25:29.629 } 00:25:29.629 ] 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "subsystem": "sock", 00:25:29.629 "config": [ 00:25:29.629 { 00:25:29.629 "method": "sock_set_default_impl", 00:25:29.629 "params": { 00:25:29.629 "impl_name": "posix" 00:25:29.629 } 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "method": "sock_impl_set_options", 00:25:29.629 "params": { 00:25:29.629 "impl_name": "ssl", 00:25:29.629 "recv_buf_size": 4096, 00:25:29.629 "send_buf_size": 4096, 00:25:29.629 "enable_recv_pipe": true, 00:25:29.629 "enable_quickack": false, 00:25:29.629 "enable_placement_id": 0, 00:25:29.629 "enable_zerocopy_send_server": true, 00:25:29.629 "enable_zerocopy_send_client": false, 00:25:29.629 "zerocopy_threshold": 0, 00:25:29.629 "tls_version": 0, 00:25:29.629 "enable_ktls": false 00:25:29.629 } 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "method": "sock_impl_set_options", 00:25:29.629 "params": { 00:25:29.629 "impl_name": "posix", 00:25:29.629 "recv_buf_size": 2097152, 00:25:29.629 "send_buf_size": 2097152, 00:25:29.629 "enable_recv_pipe": true, 00:25:29.629 "enable_quickack": false, 00:25:29.629 "enable_placement_id": 0, 00:25:29.629 "enable_zerocopy_send_server": true, 00:25:29.629 "enable_zerocopy_send_client": false, 00:25:29.629 "zerocopy_threshold": 0, 00:25:29.629 "tls_version": 0, 00:25:29.629 "enable_ktls": false 00:25:29.629 } 00:25:29.629 } 00:25:29.629 ] 
00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "subsystem": "vmd", 00:25:29.629 "config": [] 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "subsystem": "accel", 00:25:29.629 "config": [ 00:25:29.629 { 00:25:29.629 "method": "accel_set_options", 00:25:29.629 "params": { 00:25:29.629 "small_cache_size": 128, 00:25:29.629 "large_cache_size": 16, 00:25:29.629 "task_count": 2048, 00:25:29.629 "sequence_count": 2048, 00:25:29.629 "buf_count": 2048 00:25:29.629 } 00:25:29.629 } 00:25:29.629 ] 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "subsystem": "bdev", 00:25:29.629 "config": [ 00:25:29.629 { 00:25:29.629 "method": "bdev_set_options", 00:25:29.629 "params": { 00:25:29.629 "bdev_io_pool_size": 65535, 00:25:29.629 "bdev_io_cache_size": 256, 00:25:29.629 "bdev_auto_examine": true, 00:25:29.629 "iobuf_small_cache_size": 128, 00:25:29.629 "iobuf_large_cache_size": 16 00:25:29.629 } 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "method": "bdev_raid_set_options", 00:25:29.629 "params": { 00:25:29.629 "process_window_size_kb": 1024, 00:25:29.629 "process_max_bandwidth_mb_sec": 0 00:25:29.629 } 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "method": "bdev_iscsi_set_options", 00:25:29.629 "params": { 00:25:29.629 "timeout_sec": 30 00:25:29.629 } 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "method": "bdev_nvme_set_options", 00:25:29.629 "params": { 00:25:29.629 "action_on_timeout": "none", 00:25:29.629 "timeout_us": 0, 00:25:29.629 "timeout_admin_us": 0, 00:25:29.629 "keep_alive_timeout_ms": 10000, 00:25:29.629 "arbitration_burst": 0, 00:25:29.629 "low_priority_weight": 0, 00:25:29.629 "medium_priority_weight": 0, 00:25:29.629 "high_priority_weight": 0, 00:25:29.629 "nvme_adminq_poll_period_us": 10000, 00:25:29.629 "nvme_ioq_poll_period_us": 0, 00:25:29.629 "io_queue_requests": 0, 00:25:29.629 "delay_cmd_submit": true, 00:25:29.629 "transport_retry_count": 4, 00:25:29.629 "bdev_retry_count": 3, 00:25:29.629 "transport_ack_timeout": 0, 00:25:29.629 "ctrlr_loss_timeout_sec": 0, 00:25:29.629 
"reconnect_delay_sec": 0, 00:25:29.629 "fast_io_fail_timeout_sec": 0, 00:25:29.629 "disable_auto_failback": false, 00:25:29.629 "generate_uuids": false, 00:25:29.629 "transport_tos": 0, 00:25:29.629 "nvme_error_stat": false, 00:25:29.629 "rdma_srq_size": 0, 00:25:29.629 "io_path_stat": false, 00:25:29.629 "allow_accel_sequence": false, 00:25:29.629 "rdma_max_cq_size": 0, 00:25:29.629 "rdma_cm_event_timeout_ms": 0, 00:25:29.629 "dhchap_digests": [ 00:25:29.629 "sha256", 00:25:29.629 "sha384", 00:25:29.629 "sha512" 00:25:29.629 ], 00:25:29.629 "dhchap_dhgroups": [ 00:25:29.629 "null", 00:25:29.629 "ffdhe2048", 00:25:29.629 "ffdhe3072", 00:25:29.629 "ffdhe4096", 00:25:29.629 "ffdhe6144", 00:25:29.629 "ffdhe8192" 00:25:29.629 ] 00:25:29.629 } 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "method": "bdev_nvme_set_hotplug", 00:25:29.629 "params": { 00:25:29.629 "period_us": 100000, 00:25:29.629 "enable": false 00:25:29.629 } 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "method": "bdev_malloc_create", 00:25:29.629 "params": { 00:25:29.629 "name": "malloc0", 00:25:29.629 "num_blocks": 8192, 00:25:29.629 "block_size": 4096, 00:25:29.629 "physical_block_size": 4096, 00:25:29.629 "uuid": "62b508da-2ec2-4c51-a877-bf2976480370", 00:25:29.629 "optimal_io_boundary": 0, 00:25:29.629 "md_size": 0, 00:25:29.629 "dif_type": 0, 00:25:29.629 "dif_is_head_of_md": false, 00:25:29.629 "dif_pi_format": 0 00:25:29.629 } 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "method": "bdev_wait_for_examine" 00:25:29.629 } 00:25:29.629 ] 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "subsystem": "nbd", 00:25:29.629 "config": [] 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "subsystem": "scheduler", 00:25:29.629 "config": [ 00:25:29.629 { 00:25:29.629 "method": "framework_set_scheduler", 00:25:29.629 "params": { 00:25:29.629 "name": "static" 00:25:29.629 } 00:25:29.629 } 00:25:29.629 ] 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "subsystem": "nvmf", 00:25:29.629 "config": [ 00:25:29.629 { 00:25:29.629 
"method": "nvmf_set_config", 00:25:29.629 "params": { 00:25:29.629 "discovery_filter": "match_any", 00:25:29.629 "admin_cmd_passthru": { 00:25:29.629 "identify_ctrlr": false 00:25:29.629 }, 00:25:29.629 "dhchap_digests": [ 00:25:29.629 "sha256", 00:25:29.629 "sha384", 00:25:29.629 "sha512" 00:25:29.629 ], 00:25:29.629 "dhchap_dhgroups": [ 00:25:29.629 "null", 00:25:29.629 "ffdhe2048", 00:25:29.629 "ffdhe3072", 00:25:29.629 "ffdhe4096", 00:25:29.629 "ffdhe6144", 00:25:29.629 "ffdhe8192" 00:25:29.629 ] 00:25:29.629 } 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "method": "nvmf_set_max_subsystems", 00:25:29.629 "params": { 00:25:29.629 "max_subsystems": 1024 00:25:29.629 } 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "method": "nvmf_set_crdt", 00:25:29.629 "params": { 00:25:29.629 "crdt1": 0, 00:25:29.629 "crdt2": 0, 00:25:29.629 "crdt3": 0 00:25:29.629 } 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "method": "nvmf_create_transport", 00:25:29.629 "params": { 00:25:29.629 "trtype": "TCP", 00:25:29.629 "max_queue_depth": 128, 00:25:29.629 "max_io_qpairs_per_ctrlr": 127, 00:25:29.629 "in_capsule_data_size": 4096, 00:25:29.629 "max_io_size": 131072, 00:25:29.629 "io_unit_size": 131072, 00:25:29.629 "max_aq_depth": 128, 00:25:29.629 "num_shared_buffers": 511, 00:25:29.629 "buf_cache_size": 4294967295, 00:25:29.629 "dif_insert_or_strip": false, 00:25:29.629 "zcopy": false, 00:25:29.629 "c2h_success": false, 00:25:29.629 "sock_priority": 0, 00:25:29.629 "abort_timeout_sec": 1, 00:25:29.629 "ack_timeout": 0, 00:25:29.629 "data_wr_pool_size": 0 00:25:29.629 } 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "method": "nvmf_create_subsystem", 00:25:29.629 "params": { 00:25:29.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.629 "allow_any_host": false, 00:25:29.629 "serial_number": "00000000000000000000", 00:25:29.629 "model_number": "SPDK bdev Controller", 00:25:29.629 "max_namespaces": 32, 00:25:29.629 "min_cntlid": 1, 00:25:29.629 "max_cntlid": 65519, 00:25:29.629 "ana_reporting": 
false 00:25:29.629 } 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "method": "nvmf_subsystem_add_host", 00:25:29.629 "params": { 00:25:29.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.629 "host": "nqn.2016-06.io.spdk:host1", 00:25:29.629 "psk": "key0" 00:25:29.629 } 00:25:29.629 }, 00:25:29.629 { 00:25:29.629 "method": "nvmf_subsystem_add_ns", 00:25:29.629 "params": { 00:25:29.629 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.629 "namespace": { 00:25:29.629 "nsid": 1, 00:25:29.629 "bdev_name": "malloc0", 00:25:29.629 "nguid": "62B508DA2EC24C51A877BF2976480370", 00:25:29.629 "uuid": "62b508da-2ec2-4c51-a877-bf2976480370", 00:25:29.630 "no_auto_visible": false 00:25:29.630 } 00:25:29.630 } 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "method": "nvmf_subsystem_add_listener", 00:25:29.630 "params": { 00:25:29.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.630 "listen_address": { 00:25:29.630 "trtype": "TCP", 00:25:29.630 "adrfam": "IPv4", 00:25:29.630 "traddr": "10.0.0.2", 00:25:29.630 "trsvcid": "4420" 00:25:29.630 }, 00:25:29.630 "secure_channel": false, 00:25:29.630 "sock_impl": "ssl" 00:25:29.630 } 00:25:29.630 } 00:25:29.630 ] 00:25:29.630 } 00:25:29.630 ] 00:25:29.630 }' 00:25:29.630 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:29.630 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:25:29.630 "subsystems": [ 00:25:29.630 { 00:25:29.630 "subsystem": "keyring", 00:25:29.630 "config": [ 00:25:29.630 { 00:25:29.630 "method": "keyring_file_add_key", 00:25:29.630 "params": { 00:25:29.630 "name": "key0", 00:25:29.630 "path": "/tmp/tmp.Bhw1tF3Os6" 00:25:29.630 } 00:25:29.630 } 00:25:29.630 ] 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "subsystem": "iobuf", 00:25:29.630 "config": [ 00:25:29.630 { 00:25:29.630 "method": "iobuf_set_options", 00:25:29.630 "params": { 00:25:29.630 "small_pool_count": 
8192, 00:25:29.630 "large_pool_count": 1024, 00:25:29.630 "small_bufsize": 8192, 00:25:29.630 "large_bufsize": 135168, 00:25:29.630 "enable_numa": false 00:25:29.630 } 00:25:29.630 } 00:25:29.630 ] 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "subsystem": "sock", 00:25:29.630 "config": [ 00:25:29.630 { 00:25:29.630 "method": "sock_set_default_impl", 00:25:29.630 "params": { 00:25:29.630 "impl_name": "posix" 00:25:29.630 } 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "method": "sock_impl_set_options", 00:25:29.630 "params": { 00:25:29.630 "impl_name": "ssl", 00:25:29.630 "recv_buf_size": 4096, 00:25:29.630 "send_buf_size": 4096, 00:25:29.630 "enable_recv_pipe": true, 00:25:29.630 "enable_quickack": false, 00:25:29.630 "enable_placement_id": 0, 00:25:29.630 "enable_zerocopy_send_server": true, 00:25:29.630 "enable_zerocopy_send_client": false, 00:25:29.630 "zerocopy_threshold": 0, 00:25:29.630 "tls_version": 0, 00:25:29.630 "enable_ktls": false 00:25:29.630 } 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "method": "sock_impl_set_options", 00:25:29.630 "params": { 00:25:29.630 "impl_name": "posix", 00:25:29.630 "recv_buf_size": 2097152, 00:25:29.630 "send_buf_size": 2097152, 00:25:29.630 "enable_recv_pipe": true, 00:25:29.630 "enable_quickack": false, 00:25:29.630 "enable_placement_id": 0, 00:25:29.630 "enable_zerocopy_send_server": true, 00:25:29.630 "enable_zerocopy_send_client": false, 00:25:29.630 "zerocopy_threshold": 0, 00:25:29.630 "tls_version": 0, 00:25:29.630 "enable_ktls": false 00:25:29.630 } 00:25:29.630 } 00:25:29.630 ] 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "subsystem": "vmd", 00:25:29.630 "config": [] 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "subsystem": "accel", 00:25:29.630 "config": [ 00:25:29.630 { 00:25:29.630 "method": "accel_set_options", 00:25:29.630 "params": { 00:25:29.630 "small_cache_size": 128, 00:25:29.630 "large_cache_size": 16, 00:25:29.630 "task_count": 2048, 00:25:29.630 "sequence_count": 2048, 00:25:29.630 "buf_count": 2048 
00:25:29.630 } 00:25:29.630 } 00:25:29.630 ] 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "subsystem": "bdev", 00:25:29.630 "config": [ 00:25:29.630 { 00:25:29.630 "method": "bdev_set_options", 00:25:29.630 "params": { 00:25:29.630 "bdev_io_pool_size": 65535, 00:25:29.630 "bdev_io_cache_size": 256, 00:25:29.630 "bdev_auto_examine": true, 00:25:29.630 "iobuf_small_cache_size": 128, 00:25:29.630 "iobuf_large_cache_size": 16 00:25:29.630 } 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "method": "bdev_raid_set_options", 00:25:29.630 "params": { 00:25:29.630 "process_window_size_kb": 1024, 00:25:29.630 "process_max_bandwidth_mb_sec": 0 00:25:29.630 } 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "method": "bdev_iscsi_set_options", 00:25:29.630 "params": { 00:25:29.630 "timeout_sec": 30 00:25:29.630 } 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "method": "bdev_nvme_set_options", 00:25:29.630 "params": { 00:25:29.630 "action_on_timeout": "none", 00:25:29.630 "timeout_us": 0, 00:25:29.630 "timeout_admin_us": 0, 00:25:29.630 "keep_alive_timeout_ms": 10000, 00:25:29.630 "arbitration_burst": 0, 00:25:29.630 "low_priority_weight": 0, 00:25:29.630 "medium_priority_weight": 0, 00:25:29.630 "high_priority_weight": 0, 00:25:29.630 "nvme_adminq_poll_period_us": 10000, 00:25:29.630 "nvme_ioq_poll_period_us": 0, 00:25:29.630 "io_queue_requests": 512, 00:25:29.630 "delay_cmd_submit": true, 00:25:29.630 "transport_retry_count": 4, 00:25:29.630 "bdev_retry_count": 3, 00:25:29.630 "transport_ack_timeout": 0, 00:25:29.630 "ctrlr_loss_timeout_sec": 0, 00:25:29.630 "reconnect_delay_sec": 0, 00:25:29.630 "fast_io_fail_timeout_sec": 0, 00:25:29.630 "disable_auto_failback": false, 00:25:29.630 "generate_uuids": false, 00:25:29.630 "transport_tos": 0, 00:25:29.630 "nvme_error_stat": false, 00:25:29.630 "rdma_srq_size": 0, 00:25:29.630 "io_path_stat": false, 00:25:29.630 "allow_accel_sequence": false, 00:25:29.630 "rdma_max_cq_size": 0, 00:25:29.630 "rdma_cm_event_timeout_ms": 0, 00:25:29.630 
"dhchap_digests": [ 00:25:29.630 "sha256", 00:25:29.630 "sha384", 00:25:29.630 "sha512" 00:25:29.630 ], 00:25:29.630 "dhchap_dhgroups": [ 00:25:29.630 "null", 00:25:29.630 "ffdhe2048", 00:25:29.630 "ffdhe3072", 00:25:29.630 "ffdhe4096", 00:25:29.630 "ffdhe6144", 00:25:29.630 "ffdhe8192" 00:25:29.630 ] 00:25:29.630 } 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "method": "bdev_nvme_attach_controller", 00:25:29.630 "params": { 00:25:29.630 "name": "nvme0", 00:25:29.630 "trtype": "TCP", 00:25:29.630 "adrfam": "IPv4", 00:25:29.630 "traddr": "10.0.0.2", 00:25:29.630 "trsvcid": "4420", 00:25:29.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:29.630 "prchk_reftag": false, 00:25:29.630 "prchk_guard": false, 00:25:29.630 "ctrlr_loss_timeout_sec": 0, 00:25:29.630 "reconnect_delay_sec": 0, 00:25:29.630 "fast_io_fail_timeout_sec": 0, 00:25:29.630 "psk": "key0", 00:25:29.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:29.630 "hdgst": false, 00:25:29.630 "ddgst": false, 00:25:29.630 "multipath": "multipath" 00:25:29.630 } 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "method": "bdev_nvme_set_hotplug", 00:25:29.630 "params": { 00:25:29.630 "period_us": 100000, 00:25:29.630 "enable": false 00:25:29.630 } 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "method": "bdev_enable_histogram", 00:25:29.630 "params": { 00:25:29.630 "name": "nvme0n1", 00:25:29.630 "enable": true 00:25:29.630 } 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "method": "bdev_wait_for_examine" 00:25:29.630 } 00:25:29.630 ] 00:25:29.630 }, 00:25:29.630 { 00:25:29.630 "subsystem": "nbd", 00:25:29.630 "config": [] 00:25:29.630 } 00:25:29.630 ] 00:25:29.630 }' 00:25:29.630 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1387913 00:25:29.630 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1387913 ']' 00:25:29.630 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1387913 00:25:29.631 12:08:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:29.631 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:29.631 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1387913 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1387913' 00:25:29.891 killing process with pid 1387913 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1387913 00:25:29.891 Received shutdown signal, test time was about 1.000000 seconds 00:25:29.891 00:25:29.891 Latency(us) 00:25:29.891 [2024-12-05T11:08:54.940Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:29.891 [2024-12-05T11:08:54.940Z] =================================================================================================================== 00:25:29.891 [2024-12-05T11:08:54.940Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1387913 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1387733 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1387733 ']' 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1387733 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:29.891 
12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1387733 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1387733' 00:25:29.891 killing process with pid 1387733 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1387733 00:25:29.891 12:08:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1387733 00:25:30.157 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:25:30.157 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:30.157 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:30.158 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:25:30.158 "subsystems": [ 00:25:30.158 { 00:25:30.158 "subsystem": "keyring", 00:25:30.158 "config": [ 00:25:30.158 { 00:25:30.158 "method": "keyring_file_add_key", 00:25:30.158 "params": { 00:25:30.158 "name": "key0", 00:25:30.158 "path": "/tmp/tmp.Bhw1tF3Os6" 00:25:30.158 } 00:25:30.158 } 00:25:30.158 ] 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "subsystem": "iobuf", 00:25:30.158 "config": [ 00:25:30.158 { 00:25:30.158 "method": "iobuf_set_options", 00:25:30.158 "params": { 00:25:30.158 "small_pool_count": 8192, 00:25:30.158 "large_pool_count": 1024, 00:25:30.158 "small_bufsize": 8192, 00:25:30.158 "large_bufsize": 135168, 00:25:30.158 "enable_numa": false 00:25:30.158 } 00:25:30.158 } 00:25:30.158 ] 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "subsystem": "sock", 00:25:30.158 "config": [ 
00:25:30.158 { 00:25:30.158 "method": "sock_set_default_impl", 00:25:30.158 "params": { 00:25:30.158 "impl_name": "posix" 00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "method": "sock_impl_set_options", 00:25:30.158 "params": { 00:25:30.158 "impl_name": "ssl", 00:25:30.158 "recv_buf_size": 4096, 00:25:30.158 "send_buf_size": 4096, 00:25:30.158 "enable_recv_pipe": true, 00:25:30.158 "enable_quickack": false, 00:25:30.158 "enable_placement_id": 0, 00:25:30.158 "enable_zerocopy_send_server": true, 00:25:30.158 "enable_zerocopy_send_client": false, 00:25:30.158 "zerocopy_threshold": 0, 00:25:30.158 "tls_version": 0, 00:25:30.158 "enable_ktls": false 00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "method": "sock_impl_set_options", 00:25:30.158 "params": { 00:25:30.158 "impl_name": "posix", 00:25:30.158 "recv_buf_size": 2097152, 00:25:30.158 "send_buf_size": 2097152, 00:25:30.158 "enable_recv_pipe": true, 00:25:30.158 "enable_quickack": false, 00:25:30.158 "enable_placement_id": 0, 00:25:30.158 "enable_zerocopy_send_server": true, 00:25:30.158 "enable_zerocopy_send_client": false, 00:25:30.158 "zerocopy_threshold": 0, 00:25:30.158 "tls_version": 0, 00:25:30.158 "enable_ktls": false 00:25:30.158 } 00:25:30.158 } 00:25:30.158 ] 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "subsystem": "vmd", 00:25:30.158 "config": [] 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "subsystem": "accel", 00:25:30.158 "config": [ 00:25:30.158 { 00:25:30.158 "method": "accel_set_options", 00:25:30.158 "params": { 00:25:30.158 "small_cache_size": 128, 00:25:30.158 "large_cache_size": 16, 00:25:30.158 "task_count": 2048, 00:25:30.158 "sequence_count": 2048, 00:25:30.158 "buf_count": 2048 00:25:30.158 } 00:25:30.158 } 00:25:30.158 ] 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "subsystem": "bdev", 00:25:30.158 "config": [ 00:25:30.158 { 00:25:30.158 "method": "bdev_set_options", 00:25:30.158 "params": { 00:25:30.158 "bdev_io_pool_size": 65535, 00:25:30.158 "bdev_io_cache_size": 
256, 00:25:30.158 "bdev_auto_examine": true, 00:25:30.158 "iobuf_small_cache_size": 128, 00:25:30.158 "iobuf_large_cache_size": 16 00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "method": "bdev_raid_set_options", 00:25:30.158 "params": { 00:25:30.158 "process_window_size_kb": 1024, 00:25:30.158 "process_max_bandwidth_mb_sec": 0 00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "method": "bdev_iscsi_set_options", 00:25:30.158 "params": { 00:25:30.158 "timeout_sec": 30 00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "method": "bdev_nvme_set_options", 00:25:30.158 "params": { 00:25:30.158 "action_on_timeout": "none", 00:25:30.158 "timeout_us": 0, 00:25:30.158 "timeout_admin_us": 0, 00:25:30.158 "keep_alive_timeout_ms": 10000, 00:25:30.158 "arbitration_burst": 0, 00:25:30.158 "low_priority_weight": 0, 00:25:30.158 "medium_priority_weight": 0, 00:25:30.158 "high_priority_weight": 0, 00:25:30.158 "nvme_adminq_poll_period_us": 10000, 00:25:30.158 "nvme_ioq_poll_period_us": 0, 00:25:30.158 "io_queue_requests": 0, 00:25:30.158 "delay_cmd_submit": true, 00:25:30.158 "transport_retry_count": 4, 00:25:30.158 "bdev_retry_count": 3, 00:25:30.158 "transport_ack_timeout": 0, 00:25:30.158 "ctrlr_loss_timeout_sec": 0, 00:25:30.158 "reconnect_delay_sec": 0, 00:25:30.158 "fast_io_fail_timeout_sec": 0, 00:25:30.158 "disable_auto_failback": false, 00:25:30.158 "generate_uuids": false, 00:25:30.158 "transport_tos": 0, 00:25:30.158 "nvme_error_stat": false, 00:25:30.158 "rdma_srq_size": 0, 00:25:30.158 "io_path_stat": false, 00:25:30.158 "allow_accel_sequence": false, 00:25:30.158 "rdma_max_cq_size": 0, 00:25:30.158 "rdma_cm_event_timeout_ms": 0, 00:25:30.158 "dhchap_digests": [ 00:25:30.158 "sha256", 00:25:30.158 "sha384", 00:25:30.158 "sha512" 00:25:30.158 ], 00:25:30.158 "dhchap_dhgroups": [ 00:25:30.158 "null", 00:25:30.158 "ffdhe2048", 00:25:30.158 "ffdhe3072", 00:25:30.158 "ffdhe4096", 00:25:30.158 "ffdhe6144", 00:25:30.158 "ffdhe8192" 00:25:30.158 ] 
00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "method": "bdev_nvme_set_hotplug", 00:25:30.158 "params": { 00:25:30.158 "period_us": 100000, 00:25:30.158 "enable": false 00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "method": "bdev_malloc_create", 00:25:30.158 "params": { 00:25:30.158 "name": "malloc0", 00:25:30.158 "num_blocks": 8192, 00:25:30.158 "block_size": 4096, 00:25:30.158 "physical_block_size": 4096, 00:25:30.158 "uuid": "62b508da-2ec2-4c51-a877-bf2976480370", 00:25:30.158 "optimal_io_boundary": 0, 00:25:30.158 "md_size": 0, 00:25:30.158 "dif_type": 0, 00:25:30.158 "dif_is_head_of_md": false, 00:25:30.158 "dif_pi_format": 0 00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "method": "bdev_wait_for_examine" 00:25:30.158 } 00:25:30.158 ] 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "subsystem": "nbd", 00:25:30.158 "config": [] 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "subsystem": "scheduler", 00:25:30.158 "config": [ 00:25:30.158 { 00:25:30.158 "method": "framework_set_scheduler", 00:25:30.158 "params": { 00:25:30.158 "name": "static" 00:25:30.158 } 00:25:30.158 } 00:25:30.158 ] 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "subsystem": "nvmf", 00:25:30.158 "config": [ 00:25:30.158 { 00:25:30.158 "method": "nvmf_set_config", 00:25:30.158 "params": { 00:25:30.158 "discovery_filter": "match_any", 00:25:30.158 "admin_cmd_passthru": { 00:25:30.158 "identify_ctrlr": false 00:25:30.158 }, 00:25:30.158 "dhchap_digests": [ 00:25:30.158 "sha256", 00:25:30.158 "sha384", 00:25:30.158 "sha512" 00:25:30.158 ], 00:25:30.158 "dhchap_dhgroups": [ 00:25:30.158 "null", 00:25:30.158 "ffdhe2048", 00:25:30.158 "ffdhe3072", 00:25:30.158 "ffdhe4096", 00:25:30.158 "ffdhe6144", 00:25:30.158 "ffdhe8192" 00:25:30.158 ] 00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "method": "nvmf_set_max_subsystems", 00:25:30.158 "params": { 00:25:30.158 "max_subsystems": 1024 00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "method": 
"nvmf_set_crdt", 00:25:30.158 "params": { 00:25:30.158 "crdt1": 0, 00:25:30.158 "crdt2": 0, 00:25:30.158 "crdt3": 0 00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "method": "nvmf_create_transport", 00:25:30.158 "params": { 00:25:30.158 "trtype": "TCP", 00:25:30.158 "max_queue_depth": 128, 00:25:30.158 "max_io_qpairs_per_ctrlr": 127, 00:25:30.158 "in_capsule_data_size": 4096, 00:25:30.158 "max_io_size": 131072, 00:25:30.158 "io_unit_size": 131072, 00:25:30.158 "max_aq_depth": 128, 00:25:30.158 "num_shared_buffers": 511, 00:25:30.158 "buf_cache_size": 4294967295, 00:25:30.158 "dif_insert_or_strip": false, 00:25:30.158 "zcopy": false, 00:25:30.158 "c2h_success": false, 00:25:30.158 "sock_priority": 0, 00:25:30.158 "abort_timeout_sec": 1, 00:25:30.158 "ack_timeout": 0, 00:25:30.158 "data_wr_pool_size": 0 00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "method": "nvmf_create_subsystem", 00:25:30.158 "params": { 00:25:30.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.158 "allow_any_host": false, 00:25:30.158 "serial_number": "00000000000000000000", 00:25:30.158 "model_number": "SPDK bdev Controller", 00:25:30.158 "max_namespaces": 32, 00:25:30.158 "min_cntlid": 1, 00:25:30.158 "max_cntlid": 65519, 00:25:30.158 "ana_reporting": false 00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "method": "nvmf_subsystem_add_host", 00:25:30.158 "params": { 00:25:30.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.158 "host": "nqn.2016-06.io.spdk:host1", 00:25:30.158 "psk": "key0" 00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 00:25:30.158 "method": "nvmf_subsystem_add_ns", 00:25:30.158 "params": { 00:25:30.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.158 "namespace": { 00:25:30.158 "nsid": 1, 00:25:30.158 "bdev_name": "malloc0", 00:25:30.158 "nguid": "62B508DA2EC24C51A877BF2976480370", 00:25:30.158 "uuid": "62b508da-2ec2-4c51-a877-bf2976480370", 00:25:30.158 "no_auto_visible": false 00:25:30.158 } 00:25:30.158 } 00:25:30.158 }, 00:25:30.158 { 
00:25:30.158 "method": "nvmf_subsystem_add_listener", 00:25:30.158 "params": { 00:25:30.158 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.158 "listen_address": { 00:25:30.158 "trtype": "TCP", 00:25:30.159 "adrfam": "IPv4", 00:25:30.159 "traddr": "10.0.0.2", 00:25:30.159 "trsvcid": "4420" 00:25:30.159 }, 00:25:30.159 "secure_channel": false, 00:25:30.159 "sock_impl": "ssl" 00:25:30.159 } 00:25:30.159 } 00:25:30.159 ] 00:25:30.159 } 00:25:30.159 ] 00:25:30.159 }' 00:25:30.159 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:30.159 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1388599 00:25:30.159 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1388599 00:25:30.159 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:30.159 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1388599 ']' 00:25:30.159 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:30.159 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:30.159 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:30.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:30.159 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:30.159 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:30.159 [2024-12-05 12:08:55.067935] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:25:30.159 [2024-12-05 12:08:55.067990] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:30.159 [2024-12-05 12:08:55.154872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:30.159 [2024-12-05 12:08:55.184114] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:30.159 [2024-12-05 12:08:55.184140] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:30.159 [2024-12-05 12:08:55.184146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:30.159 [2024-12-05 12:08:55.184151] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:30.159 [2024-12-05 12:08:55.184155] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
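For readers adapting this trace: the target above is launched with its entire configuration piped as JSON over `/dev/fd/62` (`nvmf_tgt ... -c /dev/fd/62`). The following is a trimmed, illustrative sketch of the TLS-relevant `keyring` and `nvmf` entries from that echoed config — the NQNs, key name, PSK file path, listen address, and `sock_impl` are copied from the trace, while all tuning parameters (transport sizes, cntlid ranges, DH-CHAP lists, etc.) are omitted; it is not the full config the test actually applied.

```python
import json

# Trimmed sketch of the config JSON piped to nvmf_tgt in the trace above.
# Values shown are taken from the log; omitted fields fall back to defaults.
config = {
    "subsystems": [
        {
            "subsystem": "keyring",
            "config": [
                # PSK registered under the name later referenced by add_host
                {"method": "keyring_file_add_key",
                 "params": {"name": "key0", "path": "/tmp/tmp.Bhw1tF3Os6"}},
            ],
        },
        {
            "subsystem": "nvmf",
            "config": [
                {"method": "nvmf_create_transport",
                 "params": {"trtype": "TCP"}},
                {"method": "nvmf_create_subsystem",
                 "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                            "allow_any_host": False}},
                # Host is admitted only with the PSK registered above
                {"method": "nvmf_subsystem_add_host",
                 "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                            "host": "nqn.2016-06.io.spdk:host1",
                            "psk": "key0"}},
                # TLS is selected via the "ssl" sock_impl on the listener
                {"method": "nvmf_subsystem_add_listener",
                 "params": {"nqn": "nqn.2016-06.io.spdk:cnode1",
                            "listen_address": {"trtype": "TCP",
                                               "adrfam": "IPv4",
                                               "traddr": "10.0.0.2",
                                               "trsvcid": "4420"},
                            "secure_channel": False,
                            "sock_impl": "ssl"}},
            ],
        },
    ]
}

print(json.dumps(config, indent=2))
```

Note that `secure_channel` is `false` in the trace: TLS is enabled here by binding the listener to the `ssl` socket implementation rather than via the secure-channel flag.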
00:25:30.159 [2024-12-05 12:08:55.184625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:30.531 [2024-12-05 12:08:55.378540] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:30.531 [2024-12-05 12:08:55.410571] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:30.531 [2024-12-05 12:08:55.410785] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:30.829 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:30.829 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:30.829 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:30.829 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:30.829 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:31.119 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.119 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1388662 00:25:31.119 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1388662 /var/tmp/bdevperf.sock 00:25:31.119 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1388662 ']' 00:25:31.119 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:31.119 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.119 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:31.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:31.119 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.119 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:31.119 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:31.119 12:08:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:25:31.119 "subsystems": [ 00:25:31.119 { 00:25:31.119 "subsystem": "keyring", 00:25:31.119 "config": [ 00:25:31.119 { 00:25:31.119 "method": "keyring_file_add_key", 00:25:31.119 "params": { 00:25:31.119 "name": "key0", 00:25:31.119 "path": "/tmp/tmp.Bhw1tF3Os6" 00:25:31.119 } 00:25:31.119 } 00:25:31.119 ] 00:25:31.119 }, 00:25:31.119 { 00:25:31.119 "subsystem": "iobuf", 00:25:31.119 "config": [ 00:25:31.119 { 00:25:31.119 "method": "iobuf_set_options", 00:25:31.119 "params": { 00:25:31.119 "small_pool_count": 8192, 00:25:31.119 "large_pool_count": 1024, 00:25:31.119 "small_bufsize": 8192, 00:25:31.119 "large_bufsize": 135168, 00:25:31.119 "enable_numa": false 00:25:31.119 } 00:25:31.119 } 00:25:31.119 ] 00:25:31.119 }, 00:25:31.119 { 00:25:31.119 "subsystem": "sock", 00:25:31.119 "config": [ 00:25:31.120 { 00:25:31.120 "method": "sock_set_default_impl", 00:25:31.120 "params": { 00:25:31.120 "impl_name": "posix" 00:25:31.120 } 00:25:31.120 }, 00:25:31.120 { 00:25:31.120 "method": "sock_impl_set_options", 00:25:31.120 "params": { 00:25:31.120 "impl_name": "ssl", 00:25:31.120 "recv_buf_size": 4096, 00:25:31.120 "send_buf_size": 4096, 00:25:31.120 "enable_recv_pipe": true, 00:25:31.120 "enable_quickack": false, 00:25:31.120 "enable_placement_id": 0, 00:25:31.120 "enable_zerocopy_send_server": true, 00:25:31.120 
"enable_zerocopy_send_client": false, 00:25:31.120 "zerocopy_threshold": 0, 00:25:31.120 "tls_version": 0, 00:25:31.120 "enable_ktls": false 00:25:31.120 } 00:25:31.120 }, 00:25:31.120 { 00:25:31.120 "method": "sock_impl_set_options", 00:25:31.120 "params": { 00:25:31.120 "impl_name": "posix", 00:25:31.120 "recv_buf_size": 2097152, 00:25:31.120 "send_buf_size": 2097152, 00:25:31.120 "enable_recv_pipe": true, 00:25:31.120 "enable_quickack": false, 00:25:31.120 "enable_placement_id": 0, 00:25:31.120 "enable_zerocopy_send_server": true, 00:25:31.120 "enable_zerocopy_send_client": false, 00:25:31.120 "zerocopy_threshold": 0, 00:25:31.120 "tls_version": 0, 00:25:31.120 "enable_ktls": false 00:25:31.120 } 00:25:31.120 } 00:25:31.120 ] 00:25:31.120 }, 00:25:31.120 { 00:25:31.120 "subsystem": "vmd", 00:25:31.120 "config": [] 00:25:31.120 }, 00:25:31.120 { 00:25:31.120 "subsystem": "accel", 00:25:31.120 "config": [ 00:25:31.120 { 00:25:31.120 "method": "accel_set_options", 00:25:31.120 "params": { 00:25:31.120 "small_cache_size": 128, 00:25:31.120 "large_cache_size": 16, 00:25:31.120 "task_count": 2048, 00:25:31.120 "sequence_count": 2048, 00:25:31.120 "buf_count": 2048 00:25:31.120 } 00:25:31.120 } 00:25:31.120 ] 00:25:31.120 }, 00:25:31.120 { 00:25:31.120 "subsystem": "bdev", 00:25:31.120 "config": [ 00:25:31.120 { 00:25:31.120 "method": "bdev_set_options", 00:25:31.120 "params": { 00:25:31.120 "bdev_io_pool_size": 65535, 00:25:31.120 "bdev_io_cache_size": 256, 00:25:31.120 "bdev_auto_examine": true, 00:25:31.120 "iobuf_small_cache_size": 128, 00:25:31.120 "iobuf_large_cache_size": 16 00:25:31.120 } 00:25:31.120 }, 00:25:31.120 { 00:25:31.120 "method": "bdev_raid_set_options", 00:25:31.120 "params": { 00:25:31.120 "process_window_size_kb": 1024, 00:25:31.120 "process_max_bandwidth_mb_sec": 0 00:25:31.120 } 00:25:31.120 }, 00:25:31.120 { 00:25:31.120 "method": "bdev_iscsi_set_options", 00:25:31.120 "params": { 00:25:31.120 "timeout_sec": 30 00:25:31.120 } 00:25:31.120 }, 
00:25:31.120 { 00:25:31.120 "method": "bdev_nvme_set_options", 00:25:31.120 "params": { 00:25:31.120 "action_on_timeout": "none", 00:25:31.120 "timeout_us": 0, 00:25:31.120 "timeout_admin_us": 0, 00:25:31.120 "keep_alive_timeout_ms": 10000, 00:25:31.120 "arbitration_burst": 0, 00:25:31.120 "low_priority_weight": 0, 00:25:31.120 "medium_priority_weight": 0, 00:25:31.120 "high_priority_weight": 0, 00:25:31.120 "nvme_adminq_poll_period_us": 10000, 00:25:31.120 "nvme_ioq_poll_period_us": 0, 00:25:31.120 "io_queue_requests": 512, 00:25:31.120 "delay_cmd_submit": true, 00:25:31.120 "transport_retry_count": 4, 00:25:31.120 "bdev_retry_count": 3, 00:25:31.120 "transport_ack_timeout": 0, 00:25:31.120 "ctrlr_loss_timeout_sec": 0, 00:25:31.120 "reconnect_delay_sec": 0, 00:25:31.120 "fast_io_fail_timeout_sec": 0, 00:25:31.120 "disable_auto_failback": false, 00:25:31.120 "generate_uuids": false, 00:25:31.120 "transport_tos": 0, 00:25:31.120 "nvme_error_stat": false, 00:25:31.120 "rdma_srq_size": 0, 00:25:31.120 "io_path_stat": false, 00:25:31.120 "allow_accel_sequence": false, 00:25:31.120 "rdma_max_cq_size": 0, 00:25:31.120 "rdma_cm_event_timeout_ms": 0, 00:25:31.120 "dhchap_digests": [ 00:25:31.120 "sha256", 00:25:31.120 "sha384", 00:25:31.120 "sha512" 00:25:31.120 ], 00:25:31.120 "dhchap_dhgroups": [ 00:25:31.120 "null", 00:25:31.120 "ffdhe2048", 00:25:31.120 "ffdhe3072", 00:25:31.120 "ffdhe4096", 00:25:31.120 "ffdhe6144", 00:25:31.120 "ffdhe8192" 00:25:31.120 ] 00:25:31.120 } 00:25:31.120 }, 00:25:31.120 { 00:25:31.120 "method": "bdev_nvme_attach_controller", 00:25:31.120 "params": { 00:25:31.120 "name": "nvme0", 00:25:31.120 "trtype": "TCP", 00:25:31.120 "adrfam": "IPv4", 00:25:31.120 "traddr": "10.0.0.2", 00:25:31.120 "trsvcid": "4420", 00:25:31.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.120 "prchk_reftag": false, 00:25:31.120 "prchk_guard": false, 00:25:31.120 "ctrlr_loss_timeout_sec": 0, 00:25:31.120 "reconnect_delay_sec": 0, 00:25:31.120 
"fast_io_fail_timeout_sec": 0, 00:25:31.120 "psk": "key0", 00:25:31.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:31.120 "hdgst": false, 00:25:31.120 "ddgst": false, 00:25:31.120 "multipath": "multipath" 00:25:31.120 } 00:25:31.120 }, 00:25:31.120 { 00:25:31.120 "method": "bdev_nvme_set_hotplug", 00:25:31.120 "params": { 00:25:31.120 "period_us": 100000, 00:25:31.120 "enable": false 00:25:31.120 } 00:25:31.120 }, 00:25:31.120 { 00:25:31.120 "method": "bdev_enable_histogram", 00:25:31.120 "params": { 00:25:31.120 "name": "nvme0n1", 00:25:31.120 "enable": true 00:25:31.120 } 00:25:31.120 }, 00:25:31.120 { 00:25:31.120 "method": "bdev_wait_for_examine" 00:25:31.120 } 00:25:31.120 ] 00:25:31.120 }, 00:25:31.120 { 00:25:31.120 "subsystem": "nbd", 00:25:31.120 "config": [] 00:25:31.120 } 00:25:31.120 ] 00:25:31.120 }' 00:25:31.120 [2024-12-05 12:08:55.959191] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:31.120 [2024-12-05 12:08:55.959241] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1388662 ] 00:25:31.120 [2024-12-05 12:08:56.040868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.120 [2024-12-05 12:08:56.070430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.381 [2024-12-05 12:08:56.206097] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:31.962 12:08:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.962 12:08:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:31.962 12:08:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:25:31.962 12:08:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:25:31.962 12:08:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.962 12:08:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:31.962 Running I/O for 1 seconds... 00:25:33.349 5502.00 IOPS, 21.49 MiB/s 00:25:33.349 Latency(us) 00:25:33.349 [2024-12-05T11:08:58.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.349 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:33.349 Verification LBA range: start 0x0 length 0x2000 00:25:33.349 nvme0n1 : 1.01 5558.66 21.71 0.00 0.00 22894.79 5679.79 67283.63 00:25:33.349 [2024-12-05T11:08:58.398Z] =================================================================================================================== 00:25:33.349 [2024-12-05T11:08:58.398Z] Total : 5558.66 21.71 0.00 0.00 22894.79 5679.79 67283.63 00:25:33.349 { 00:25:33.349 "results": [ 00:25:33.349 { 00:25:33.349 "job": "nvme0n1", 00:25:33.349 "core_mask": "0x2", 00:25:33.349 "workload": "verify", 00:25:33.349 "status": "finished", 00:25:33.349 "verify_range": { 00:25:33.349 "start": 0, 00:25:33.349 "length": 8192 00:25:33.349 }, 00:25:33.349 "queue_depth": 128, 00:25:33.349 "io_size": 4096, 00:25:33.349 "runtime": 1.012834, 00:25:33.349 "iops": 5558.660155563498, 00:25:33.349 "mibps": 21.713516232669914, 00:25:33.349 "io_failed": 0, 00:25:33.349 "io_timeout": 0, 00:25:33.349 "avg_latency_us": 22894.792071047956, 00:25:33.349 "min_latency_us": 5679.786666666667, 00:25:33.349 "max_latency_us": 67283.62666666666 00:25:33.349 } 00:25:33.349 ], 00:25:33.349 "core_count": 1 00:25:33.349 } 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT 
SIGTERM EXIT 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:33.349 nvmf_trace.0 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1388662 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1388662 ']' 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1388662 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1388662 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1388662' 00:25:33.349 killing process with pid 1388662 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1388662 00:25:33.349 Received shutdown signal, test time was about 1.000000 seconds 00:25:33.349 00:25:33.349 Latency(us) 00:25:33.349 [2024-12-05T11:08:58.398Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:33.349 [2024-12-05T11:08:58.398Z] =================================================================================================================== 00:25:33.349 [2024-12-05T11:08:58.398Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1388662 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@99 -- # sync 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@102 -- # set +e 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:33.349 rmmod nvme_tcp 00:25:33.349 rmmod nvme_fabrics 00:25:33.349 rmmod nvme_keyring 00:25:33.349 12:08:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@106 -- # set -e 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@107 -- # return 0 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # '[' -n 1388599 ']' 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@337 -- # killprocess 1388599 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1388599 ']' 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1388599 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:33.349 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1388599 00:25:33.609 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:33.609 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:33.609 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1388599' 00:25:33.609 killing process with pid 1388599 00:25:33.609 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1388599 00:25:33.609 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1388599 00:25:33.609 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:33.609 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # nvmf_fini 00:25:33.609 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@254 -- # local dev 
00:25:33.609 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:33.609 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:33.609 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:33.609 12:08:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # return 0 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 
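As a sanity check on the bdevperf summary printed earlier in the trace (5558.66 IOPS at a 4096-byte I/O size over a 1.012834 s run), the MiB/s column is simply IOPS × io_size / 2^20, which for 4 KiB I/Os reduces to IOPS / 256:

```python
# Figures taken from the JSON results block in the trace above.
iops = 5558.660155563498
io_size = 4096          # bytes per I/O ("-o 4k" on the bdevperf command line)
runtime = 1.012834      # seconds

mibps = iops * io_size / (1024 * 1024)   # equals iops / 256 for 4 KiB I/Os
total_ios = iops * runtime               # completed I/Os over the run

print(f"{mibps:.2f} MiB/s, ~{total_ios:.0f} I/Os")
```

The computed 21.71 MiB/s matches the `mibps` field reported in the results JSON, confirming the summary columns are internally consistent.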
00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # _dev=0 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # dev_map=() 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@274 -- # iptr 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-save 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-restore 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.pTChl9p6I6 /tmp/tmp.u9xQdFdzck /tmp/tmp.Bhw1tF3Os6 00:25:36.154 00:25:36.154 real 1m26.835s 00:25:36.154 user 2m16.182s 00:25:36.154 sys 0m27.448s 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:36.154 ************************************ 00:25:36.154 END TEST nvmf_tls 00:25:36.154 ************************************ 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh 
--transport=tcp 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:36.154 ************************************ 00:25:36.154 START TEST nvmf_fips 00:25:36.154 ************************************ 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:36.154 * Looking for test storage... 00:25:36.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:36.154 12:09:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:36.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.154 --rc genhtml_branch_coverage=1 00:25:36.154 --rc genhtml_function_coverage=1 00:25:36.154 --rc genhtml_legend=1 00:25:36.154 --rc geninfo_all_blocks=1 00:25:36.154 --rc geninfo_unexecuted_blocks=1 00:25:36.154 00:25:36.154 ' 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:36.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.154 --rc genhtml_branch_coverage=1 00:25:36.154 --rc genhtml_function_coverage=1 00:25:36.154 --rc genhtml_legend=1 00:25:36.154 --rc geninfo_all_blocks=1 00:25:36.154 --rc geninfo_unexecuted_blocks=1 00:25:36.154 00:25:36.154 ' 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:36.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.154 --rc genhtml_branch_coverage=1 00:25:36.154 --rc genhtml_function_coverage=1 00:25:36.154 --rc genhtml_legend=1 00:25:36.154 --rc geninfo_all_blocks=1 00:25:36.154 --rc geninfo_unexecuted_blocks=1 00:25:36.154 00:25:36.154 ' 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:36.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:36.154 --rc genhtml_branch_coverage=1 00:25:36.154 --rc genhtml_function_coverage=1 00:25:36.154 --rc genhtml_legend=1 00:25:36.154 --rc geninfo_all_blocks=1 00:25:36.154 --rc geninfo_unexecuted_blocks=1 00:25:36.154 
00:25:36.154 ' 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.154 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@50 -- # : 0 00:25:36.155 12:09:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:36.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:36.155 12:09:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:36.155 12:09:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:36.155 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:25:36.155 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:36.155 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:36.155 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:36.155 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:36.155 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:36.155 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:36.155 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:36.155 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:36.155 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:36.155 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:25:36.156 Error setting digest 00:25:36.156 40423515C07F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:36.156 40423515C07F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # remove_target_ns 00:25:36.156 12:09:01 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # xtrace_disable 00:25:36.156 12:09:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:44.297 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:44.297 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # pci_devs=() 00:25:44.297 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:44.297 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:44.297 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:44.297 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:44.297 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:44.297 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # net_devs=() 00:25:44.297 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:44.297 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # e810=() 00:25:44.297 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # local -ga e810 00:25:44.297 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@137 -- # x722=() 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@137 -- # local -ga x722 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # mlx=() 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # local -ga mlx 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 
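The `cmp_versions` helper traced earlier (scripts/common.sh@333-368, driving both the `lt 1.15 2` lcov gate and the `ge 3.1.1 3.0.0` OpenSSL gate) can be sketched as a standalone function. This is a hedged reconstruction from the trace, not the verbatim SPDK source:

```shell
#!/usr/bin/env bash
# Reconstructed sketch of the cmp_versions pattern seen in the trace:
# split each version string on '.', '-' or ':' into an array, then compare
# the components numerically left to right, treating missing components as 0.
cmp_lt() { # succeeds (returns 0) when $1 is strictly older than $2
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1 # equal versions are not less-than
}
cmp_lt 1.15 2 && echo "lcov 1.15 < 2"                # matches the lt trace
cmp_lt 3.1.1 3.0.0 || echo "OpenSSL 3.1.1 >= 3.0.0"  # matches the ge trace
```

Splitting on `IFS=.-:` is what lets the same helper handle plain `1.15` as well as suffixed versions like `3.0.0-alpha`.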
00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:44.298 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:44.298 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
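Stepping back to the FIPS gate traced a few records earlier (fips.sh@101-102): on RHEL, `openssl fipsinstall -help` prints a warning instead of usage text, and the script detects that build by glob-matching the message. A minimal stand-alone sketch, with the warning string copied verbatim from the log (on a non-RHEL build the variable would hold usage text and the match would fail):

```shell
#!/usr/bin/env bash
# Sketch of the fips.sh@101-102 detection. The warning below is the string
# the log captured from 'openssl fipsinstall -help' on this RHEL 9 host.
warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode'
if [[ $warn == 'This command is not enabled'* ]]; then
    echo "RHEL OpenSSL build: FIPS is managed system-wide, not via fipsinstall"
fi
```

This is also why the subsequent `openssl md5 /dev/fd/62` in the log fails with "unsupported": with the FIPS provider active, legacy digests such as MD5 are rejected, which is exactly the negative test `NOT openssl md5` asserts.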
00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:44.298 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 
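The device-discovery step traced above (nvmf/common.sh@227-244) globs the `net/` subdirectory of each PCI device's sysfs node, then strips the leading path with the `${var##*/}` expansion so only interface names like `cvl_0_0` remain. A self-contained sketch of that pattern, using a throwaway directory to stand in for `/sys/bus/pci/devices`:

```shell
#!/usr/bin/env bash
# Sketch of the pci_net_devs discovery traced in the log. A temp directory
# simulates the sysfs layout /sys/bus/pci/devices/<pci>/net/<ifname>.
sysfs=$(mktemp -d)
pci=0000:4b:00.0
mkdir -p "$sysfs/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/$pci/net/"*)       # full sysfs paths
pci_net_devs=("${pci_net_devs[@]##*/}")  # strip everything up to the last '/'
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$sysfs"
```

Applying `##*/` across the whole array in one expansion is the idiomatic bash way to basename a list without forking `basename` per entry.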
00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:44.298 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # is_hw=yes 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@247 -- # create_target_ns 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@28 -- # local -g _dev 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:44.298 12:09:08 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772161 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # 
printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:44.298 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:44.299 10.0.0.1 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772162 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:44.299 
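The `val_to_ip` calls traced above turn the pool values 167772161 and 167772162 into `10.0.0.1` and `10.0.0.2`. A minimal sketch of that conversion (the helper below is a reconstruction from the trace, not the exact `nvmf/setup.sh` source):

```shell
# Reconstructed sketch of the val_to_ip helper seen in the trace:
# splits a 32-bit integer into four octets for dotted-quad notation.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24)) \
    $(((val >> 16) & 0xff)) \
    $(((val >> 8) & 0xff)) \
    $((val & 0xff))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```

This is also why the `ip_pool` arithmetic advances by 2 per pair: each initiator/target pair consumes two consecutive addresses from the pool.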
12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:44.299 10.0.0.2 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # [[ 
tcp == tcp ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 
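The `setup_interfaces`/`setup_interface_pair` flow traced above can be condensed into a short sketch: create the target namespace, move the target-side NIC into it, assign the pair of addresses, bring the links up, and open the NVMe/TCP port. Device names and IPs are taken from this run; the sketch assumes root and physical NICs `cvl_0_0`/`cvl_0_1`, so it is illustrative rather than directly runnable here.

```shell
# Condensed sketch of the interface-pair setup above (assumes root and
# that cvl_0_0 / cvl_0_1 exist; names and addresses follow this run).
ns=nvmf_ns_spdk
ip netns add "$ns"
ip netns exec "$ns" ip link set lo up

# Move the target-side NIC into the namespace.
ip link set cvl_0_1 netns "$ns"

# Initiator side stays in the root namespace.
ip addr add 10.0.0.1/24 dev cvl_0_0
echo 10.0.0.1 > /sys/class/net/cvl_0_0/ifalias
ip link set cvl_0_0 up

# Target side is configured inside the namespace.
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_1
echo 10.0.0.2 | ip netns exec "$ns" tee /sys/class/net/cvl_0_1/ifalias
ip netns exec "$ns" ip link set cvl_0_1 up

# Open the NVMe/TCP listener port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT

# Sanity check, matching the ping_ips step in the trace.
ip netns exec "$ns" ping -c 1 10.0.0.1
ping -c 1 10.0.0.2
```

The `ifalias` write is what later lets `get_ip_address` recover the address with a plain `cat /sys/class/net/<dev>/ifalias`, as the trace shows.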
00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:44.299 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:44.299 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.655 ms 00:25:44.299 00:25:44.299 --- 10.0.0.1 ping statistics --- 00:25:44.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.299 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 
00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:44.299 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:44.299 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.305 ms 00:25:44.299 00:25:44.299 --- 10.0.0.2 ping statistics --- 00:25:44.299 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.299 rtt min/avg/max/mdev = 0.305/0.305/0.305/0.000 ms 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # return 0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:44.299 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:44.300 
12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # return 1 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev= 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@160 -- # return 0 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@101 -- # echo cvl_0_1 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target1 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n '' ]] 
00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # return 1 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev= 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@160 -- # return 0 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:25:44.300 ' 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # nvmfpid=1393689 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # waitforlisten 1393689 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # ip 
netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1393689 ']' 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:44.300 12:09:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:44.300 [2024-12-05 12:09:08.844251] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:44.300 [2024-12-05 12:09:08.844320] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.300 [2024-12-05 12:09:08.942879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.300 [2024-12-05 12:09:08.994848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:44.300 [2024-12-05 12:09:08.994896] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:44.300 [2024-12-05 12:09:08.994905] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:44.300 [2024-12-05 12:09:08.994913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:44.300 [2024-12-05 12:09:08.994921] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:44.300 [2024-12-05 12:09:08.995703] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.872 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:44.872 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:44.872 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:44.872 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:44.872 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:44.872 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:44.872 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:44.872 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:44.872 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:44.872 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.9LJ 00:25:44.872 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:44.872 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.9LJ 00:25:44.872 12:09:09 
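The PSK staging in `fips.sh` above (a `mktemp` file, the interchange-format key written with `echo -n`, then `chmod 0600`) can be sketched as follows. The key string is the sample key published in the test itself, not a real secret.

```shell
# Sketch of the fips.sh PSK staging above: the TLS interchange-format key
# is written to a mode-0600 temp file before being registered via RPC.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)
printf '%s' "$key" > "$key_path"   # equivalent of the trace's `echo -n ... >`
chmod 0600 "$key_path"
ls -l "$key_path"
```

The restrictive mode matters: key files handed to a keyring are expected to be readable only by their owner.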
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.9LJ 00:25:44.872 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.9LJ 00:25:44.872 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:44.872 [2024-12-05 12:09:09.867812] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.872 [2024-12-05 12:09:09.883809] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:44.872 [2024-12-05 12:09:09.884113] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.133 malloc0 00:25:45.133 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:45.133 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1394106 00:25:45.133 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1394106 /var/tmp/bdevperf.sock 00:25:45.133 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:45.133 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1394106 ']' 00:25:45.133 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:45.133 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:45.133 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:45.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:45.133 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:45.133 12:09:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:45.133 [2024-12-05 12:09:10.038613] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:25:45.133 [2024-12-05 12:09:10.038695] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1394106 ] 00:25:45.133 [2024-12-05 12:09:10.134859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:45.394 [2024-12-05 12:09:10.189230] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.966 12:09:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:45.966 12:09:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:45.966 12:09:10 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.9LJ 00:25:46.227 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:46.227 [2024-12-05 12:09:11.217268] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:46.487 TLSTESTn1 00:25:46.487 12:09:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:46.487 Running I/O 
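The TLS attach sequence scattered through the trace above reduces to three commands: register the key file with the keyring, attach a controller over TLS using it, then drive I/O. This sketch assumes an SPDK checkout and a `bdevperf` instance already listening on `/var/tmp/bdevperf.sock`; arguments are copied from this run.

```shell
# Condensed sketch of the fips.sh steps 151/152/156 above (assumes an SPDK
# checkout as CWD and bdevperf running with -r /var/tmp/bdevperf.sock).
rpc=./scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# 1) Register the staged PSK file under the key name "key0".
"$rpc" -s "$sock" keyring_file_add_key key0 /tmp/spdk-psk.9LJ

# 2) Attach to the TLS-enabled NVMe/TCP subsystem using that key.
"$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0

# 3) Run the configured verify workload against the attached bdev.
./examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests
```

Note that `--psk` takes the keyring name (`key0`), not the file path; the path was consumed in step 1.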
for 10 seconds... 00:25:48.810 5799.00 IOPS, 22.65 MiB/s [2024-12-05T11:09:14.800Z] 5412.00 IOPS, 21.14 MiB/s [2024-12-05T11:09:15.738Z] 5516.33 IOPS, 21.55 MiB/s [2024-12-05T11:09:16.678Z] 5492.75 IOPS, 21.46 MiB/s [2024-12-05T11:09:17.619Z] 5662.00 IOPS, 22.12 MiB/s [2024-12-05T11:09:18.562Z] 5579.33 IOPS, 21.79 MiB/s [2024-12-05T11:09:19.504Z] 5425.71 IOPS, 21.19 MiB/s [2024-12-05T11:09:20.885Z] 5530.12 IOPS, 21.60 MiB/s [2024-12-05T11:09:21.454Z] 5536.22 IOPS, 21.63 MiB/s [2024-12-05T11:09:21.715Z] 5498.80 IOPS, 21.48 MiB/s 00:25:56.666 Latency(us) 00:25:56.666 [2024-12-05T11:09:21.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.666 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:56.666 Verification LBA range: start 0x0 length 0x2000 00:25:56.666 TLSTESTn1 : 10.01 5504.26 21.50 0.00 0.00 23219.21 5160.96 26651.31 00:25:56.666 [2024-12-05T11:09:21.715Z] =================================================================================================================== 00:25:56.666 [2024-12-05T11:09:21.715Z] Total : 5504.26 21.50 0.00 0.00 23219.21 5160.96 26651.31 00:25:56.666 { 00:25:56.666 "results": [ 00:25:56.666 { 00:25:56.666 "job": "TLSTESTn1", 00:25:56.666 "core_mask": "0x4", 00:25:56.666 "workload": "verify", 00:25:56.666 "status": "finished", 00:25:56.666 "verify_range": { 00:25:56.666 "start": 0, 00:25:56.666 "length": 8192 00:25:56.666 }, 00:25:56.666 "queue_depth": 128, 00:25:56.666 "io_size": 4096, 00:25:56.666 "runtime": 10.012969, 00:25:56.666 "iops": 5504.2615232305225, 00:25:56.666 "mibps": 21.50102157511923, 00:25:56.666 "io_failed": 0, 00:25:56.666 "io_timeout": 0, 00:25:56.666 "avg_latency_us": 23219.208608581004, 00:25:56.666 "min_latency_us": 5160.96, 00:25:56.666 "max_latency_us": 26651.306666666667 00:25:56.666 } 00:25:56.666 ], 00:25:56.666 "core_count": 1 00:25:56.666 } 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 
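The summary table and JSON above are internally consistent: MiB/s is derived from IOPS and the 4096-byte I/O size. A quick cross-check of the reported numbers (values copied from the run above):

```shell
# Cross-check the bdevperf result above: mibps == iops * io_size / 2^20.
iops=5504.2615232305225
io_size=4096
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f MiB/s\n", iops * sz / (1024 * 1024) }'
```

This reproduces the reported 21.50 MiB/s, confirming the table's MiB/s column is computed from the JSON `iops` and `io_size` fields rather than measured independently.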
00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:56.666 nvmf_trace.0 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1394106 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1394106 ']' 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1394106 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1394106 00:25:56.666 12:09:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1394106' 00:25:56.666 killing process with pid 1394106 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1394106 00:25:56.666 Received shutdown signal, test time was about 10.000000 seconds 00:25:56.666 00:25:56.666 Latency(us) 00:25:56.666 [2024-12-05T11:09:21.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:56.666 [2024-12-05T11:09:21.715Z] =================================================================================================================== 00:25:56.666 [2024-12-05T11:09:21.715Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:56.666 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1394106 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@99 -- # sync 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@102 -- # set +e 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:56.926 rmmod nvme_tcp 00:25:56.926 rmmod nvme_fabrics 00:25:56.926 rmmod nvme_keyring 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 
00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@106 -- # set -e 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@107 -- # return 0 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # '[' -n 1393689 ']' 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@337 -- # killprocess 1393689 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1393689 ']' 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1393689 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1393689 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1393689' 00:25:56.926 killing process with pid 1393689 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1393689 00:25:56.926 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1393689 00:25:57.186 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:57.186 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # nvmf_fini 00:25:57.186 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@254 -- # local dev 00:25:57.186 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@257 
-- # remove_target_ns 00:25:57.186 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:57.186 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:57.186 12:09:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # return 0 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # _dev=0 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # dev_map=() 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@274 -- # iptr 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-save 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-restore 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.9LJ 00:25:59.100 00:25:59.100 real 0m23.384s 00:25:59.100 user 0m25.133s 00:25:59.100 sys 0m9.688s 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:59.100 ************************************ 00:25:59.100 END TEST nvmf_fips 00:25:59.100 ************************************ 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.100 12:09:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:59.362 ************************************ 00:25:59.362 START TEST nvmf_control_msg_list 00:25:59.362 ************************************ 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:59.362 * Looking for test storage... 00:25:59.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.362 12:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:59.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.362 --rc genhtml_branch_coverage=1 00:25:59.362 --rc genhtml_function_coverage=1 00:25:59.362 --rc 
genhtml_legend=1 00:25:59.362 --rc geninfo_all_blocks=1 00:25:59.362 --rc geninfo_unexecuted_blocks=1 00:25:59.362 00:25:59.362 ' 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:59.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.362 --rc genhtml_branch_coverage=1 00:25:59.362 --rc genhtml_function_coverage=1 00:25:59.362 --rc genhtml_legend=1 00:25:59.362 --rc geninfo_all_blocks=1 00:25:59.362 --rc geninfo_unexecuted_blocks=1 00:25:59.362 00:25:59.362 ' 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:59.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.362 --rc genhtml_branch_coverage=1 00:25:59.362 --rc genhtml_function_coverage=1 00:25:59.362 --rc genhtml_legend=1 00:25:59.362 --rc geninfo_all_blocks=1 00:25:59.362 --rc geninfo_unexecuted_blocks=1 00:25:59.362 00:25:59.362 ' 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:59.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.362 --rc genhtml_branch_coverage=1 00:25:59.362 --rc genhtml_function_coverage=1 00:25:59.362 --rc genhtml_legend=1 00:25:59.362 --rc geninfo_all_blocks=1 00:25:59.362 --rc geninfo_unexecuted_blocks=1 00:25:59.362 00:25:59.362 ' 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.362 12:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.362 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.623 
12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:59.623 12:09:24 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@50 -- # : 0 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:59.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:59.623 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # remove_target_ns 
00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # xtrace_disable 00:25:59.624 12:09:24 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # pci_devs=() 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # net_devs=() 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:07.777 12:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # e810=() 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # local -ga e810 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # x722=() 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # local -ga x722 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # mlx=() 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # local -ga mlx 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:07.777 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:07.777 12:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:07.777 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:07.777 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 
00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:07.778 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:07.778 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:07.778 12:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # is_hw=yes 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@247 -- # create_target_ns 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip 
link set lo up 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@28 -- # local -g _dev 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@55 -- # 
initiator=cvl_0_0 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772161 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:07.778 12:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:07.778 10.0.0.1 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772162 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:07.778 10.0.0.2 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:26:07.778 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:07.778 12:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:07.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:07.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.704 ms 00:26:07.779 00:26:07.779 --- 10.0.0.1 ping statistics --- 00:26:07.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.779 rtt min/avg/max/mdev = 0.704/0.704/0.704/0.000 ms 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:07.779 12:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:07.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:07.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:26:07.779 00:26:07.779 --- 10.0.0.2 ping statistics --- 00:26:07.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:07.779 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@270 -- # return 0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:07.779 12:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 
10.0.0.1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # return 1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev= 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@160 -- # return 0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:07.779 12:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:07.779 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:07.780 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:07.780 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:07.780 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:07.780 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:07.780 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:07.780 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:07.780 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:07.780 12:09:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:07.780 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:07.780 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:07.780 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:07.780 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:07.780 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:07.780 12:09:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target1 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # return 1 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev= 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@160 -- # return 0 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:26:07.780 ' 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # nvmfpid=1400976 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # waitforlisten 1400976 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1400976 ']' 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:07.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:07.780 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:07.780 [2024-12-05 12:09:32.110575] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:26:07.780 [2024-12-05 12:09:32.110642] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:07.780 [2024-12-05 12:09:32.208828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:07.780 [2024-12-05 12:09:32.259670] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:07.780 [2024-12-05 12:09:32.259722] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:07.780 [2024-12-05 12:09:32.259731] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:07.780 [2024-12-05 12:09:32.259738] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:07.780 [2024-12-05 12:09:32.259745] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
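Aside: the setup_interfaces trace earlier in this log maps the integer pool value 0x0a000001 (167772161) to the dotted-quad addresses 10.0.0.1 and 10.0.0.2 via setup.sh's val_to_ip before assigning them to cvl_0_0 and cvl_0_1. A minimal standalone sketch of that conversion (the function name mirrors the script, but this is a reconstruction, not the script itself):

```shell
# Sketch of nvmf/setup.sh's val_to_ip: unpack a 32-bit value into a dotted quad.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) \
        $(( (val >> 16) & 255 )) \
        $(( (val >> 8)  & 255 )) \
        $((  val        & 255 ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator side, cvl_0_0)
val_to_ip 167772162   # 10.0.0.2 (target side, cvl_0_1, inside the nvmf_ns_spdk netns)
```

The trace's `ips=("$ip" $((++ip)))` line is the same idea: each initiator/target pair consumes two consecutive values from the pool.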
00:26:07.780 [2024-12-05 12:09:32.260520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:08.041 [2024-12-05 12:09:32.975581] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.041 12:09:32 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:08.041 Malloc0 00:26:08.041 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.041 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:08.041 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.041 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:08.041 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.041 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:08.041 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.041 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:08.042 [2024-12-05 12:09:33.029997] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.042 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.042 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1401006 00:26:08.042 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:08.042 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1401007 00:26:08.042 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:08.042 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1401008 00:26:08.042 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1401006 00:26:08.042 12:09:33 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:08.302 [2024-12-05 12:09:33.130544] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:26:08.302 [2024-12-05 12:09:33.140659] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:08.302 [2024-12-05 12:09:33.141004] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:09.240 Initializing NVMe Controllers 00:26:09.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:09.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:26:09.240 Initialization complete. Launching workers. 00:26:09.240 ======================================================== 00:26:09.240 Latency(us) 00:26:09.240 Device Information : IOPS MiB/s Average min max 00:26:09.240 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 1496.00 5.84 668.17 298.45 897.32 00:26:09.240 ======================================================== 00:26:09.240 Total : 1496.00 5.84 668.17 298.45 897.32 00:26:09.240 00:26:09.240 Initializing NVMe Controllers 00:26:09.240 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:09.240 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:26:09.240 Initialization complete. Launching workers. 
00:26:09.240 ======================================================== 00:26:09.240 Latency(us) 00:26:09.240 Device Information : IOPS MiB/s Average min max 00:26:09.240 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40894.63 40741.28 41066.72 00:26:09.240 ======================================================== 00:26:09.240 Total : 25.00 0.10 40894.63 40741.28 41066.72 00:26:09.240 00:26:09.500 Initializing NVMe Controllers 00:26:09.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:09.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:26:09.500 Initialization complete. Launching workers. 00:26:09.500 ======================================================== 00:26:09.500 Latency(us) 00:26:09.500 Device Information : IOPS MiB/s Average min max 00:26:09.500 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1530.00 5.98 653.71 290.89 963.34 00:26:09.500 ======================================================== 00:26:09.500 Total : 1530.00 5.98 653.71 290.89 963.34 00:26:09.500 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1401007 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1401008 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@99 -- # sync 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:09.500 12:09:34 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@102 -- # set +e 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:09.500 rmmod nvme_tcp 00:26:09.500 rmmod nvme_fabrics 00:26:09.500 rmmod nvme_keyring 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@106 -- # set -e 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@107 -- # return 0 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # '[' -n 1400976 ']' 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@337 -- # killprocess 1400976 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1400976 ']' 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1400976 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1400976 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1400976' 00:26:09.500 killing process with pid 1400976 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1400976 00:26:09.500 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1400976 00:26:09.761 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:09.761 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # nvmf_fini 00:26:09.762 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@254 -- # local dev 00:26:09.762 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:09.762 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:09.762 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:09.762 12:09:34 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # return 0 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # _dev=0 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # dev_map=() 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@274 -- # iptr 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@548 -- # iptables-save 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # iptables-restore 00:26:11.671 00:26:11.671 real 0m12.537s 00:26:11.671 user 0m7.989s 00:26:11.671 sys 0m6.656s 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:11.671 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:26:11.671 ************************************ 00:26:11.672 END TEST nvmf_control_msg_list 00:26:11.672 ************************************ 00:26:11.931 12:09:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:26:11.931 12:09:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:11.931 12:09:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:11.931 12:09:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:11.931 ************************************ 00:26:11.931 START TEST nvmf_wait_for_buf 00:26:11.931 ************************************ 00:26:11.931 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:26:11.931 * Looking for test storage... 
00:26:11.931 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:11.931 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:11.931 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:26:11.931 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:12.192 12:09:36 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:26:12.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.192 --rc genhtml_branch_coverage=1 00:26:12.192 --rc genhtml_function_coverage=1 00:26:12.192 --rc genhtml_legend=1 00:26:12.192 --rc geninfo_all_blocks=1 00:26:12.192 --rc geninfo_unexecuted_blocks=1 00:26:12.192 00:26:12.192 ' 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:12.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.192 --rc genhtml_branch_coverage=1 00:26:12.192 --rc genhtml_function_coverage=1 00:26:12.192 --rc genhtml_legend=1 00:26:12.192 --rc geninfo_all_blocks=1 00:26:12.192 --rc geninfo_unexecuted_blocks=1 00:26:12.192 00:26:12.192 ' 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:12.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.192 --rc genhtml_branch_coverage=1 00:26:12.192 --rc genhtml_function_coverage=1 00:26:12.192 --rc genhtml_legend=1 00:26:12.192 --rc geninfo_all_blocks=1 00:26:12.192 --rc geninfo_unexecuted_blocks=1 00:26:12.192 00:26:12.192 ' 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:12.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:12.192 --rc genhtml_branch_coverage=1 00:26:12.192 --rc genhtml_function_coverage=1 00:26:12.192 --rc genhtml_legend=1 00:26:12.192 --rc geninfo_all_blocks=1 00:26:12.192 --rc geninfo_unexecuted_blocks=1 00:26:12.192 00:26:12.192 ' 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:12.192 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- nvmf/common.sh@50 -- # : 0 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:12.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # remove_target_ns 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # xtrace_disable 00:26:12.193 12:09:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:20.329 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # pci_devs=() 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # net_devs=() 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # e810=() 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # local -ga e810 00:26:20.330 
12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # x722=() 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # local -ga x722 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # mlx=() 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # local -ga mlx 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:20.330 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:20.330 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:20.330 12:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:20.330 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:20.330 12:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:20.330 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # is_hw=yes 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:20.330 12:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@247 -- # create_target_ns 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@28 -- # local -g _dev 
00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:20.330 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 
00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772161 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:20.331 10.0.0.1 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 
in_ns=NVMF_TARGET_NS_CMD 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772162 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:20.331 10.0.0.2 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:26:20.331 12:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:20.331 12:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:20.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:20.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.632 ms 00:26:20.331 00:26:20.331 --- 10.0.0.1 ping statistics --- 00:26:20.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.331 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:20.331 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:20.332 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:20.332 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:26:20.332 00:26:20.332 --- 10.0.0.2 ping statistics --- 00:26:20.332 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:20.332 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@270 -- # return 0 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # 
get_initiator_ip_address initiator1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # return 1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev= 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@160 -- # return 0 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # 
local -n ns=NVMF_TARGET_NS_CMD 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # return 1 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev= 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@160 -- # return 0 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:26:20.332 ' 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:20.332 12:09:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # nvmfpid=1405684 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@329 -- # waitforlisten 1405684 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1405684 ']' 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:20.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.332 12:09:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:20.332 [2024-12-05 12:09:44.756252] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:26:20.332 [2024-12-05 12:09:44.756318] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.332 [2024-12-05 12:09:44.857150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.332 [2024-12-05 12:09:44.907014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.332 [2024-12-05 12:09:44.907067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.332 [2024-12-05 12:09:44.907076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:20.333 [2024-12-05 12:09:44.907083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:20.333 [2024-12-05 12:09:44.907089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:20.333 [2024-12-05 12:09:44.907859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.594 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.594 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:26:20.594 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:20.594 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:20.594 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:20.594 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.594 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:26:20.594 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:20.594 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:26:20.594 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.594 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:20.595 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.595 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:26:20.595 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:20.595 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:20.854 Malloc0 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:20.854 [2024-12-05 12:09:45.746314] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:20.854 [2024-12-05 12:09:45.782652] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:20.854 12:09:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:20.854 [2024-12-05 12:09:45.885559] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:26:22.236 Initializing NVMe Controllers 00:26:22.236 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:26:22.236 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:26:22.236 Initialization complete. Launching workers. 00:26:22.236 ======================================================== 00:26:22.236 Latency(us) 00:26:22.236 Device Information : IOPS MiB/s Average min max 00:26:22.236 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 114.00 14.25 36544.60 8015.50 71836.04 00:26:22.236 ======================================================== 00:26:22.236 Total : 114.00 14.25 36544.60 8015.50 71836.04 00:26:22.236 00:26:22.497 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1798 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1798 -eq 0 ]] 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # 
nvmftestfini 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@99 -- # sync 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@102 -- # set +e 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:22.498 rmmod nvme_tcp 00:26:22.498 rmmod nvme_fabrics 00:26:22.498 rmmod nvme_keyring 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@106 -- # set -e 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@107 -- # return 0 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # '[' -n 1405684 ']' 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@337 -- # killprocess 1405684 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1405684 ']' 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1405684 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1405684 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1405684' 00:26:22.498 killing process with pid 1405684 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1405684 00:26:22.498 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1405684 00:26:22.758 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:22.758 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # nvmf_fini 00:26:22.758 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@254 -- # local dev 00:26:22.758 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:22.758 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:22.758 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:22.758 12:09:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:24.668 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:24.668 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:24.668 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # return 0 00:26:24.668 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:24.668 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 
-- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:24.668 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:24.668 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:26:24.668 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:26:24.668 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:24.668 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # _dev=0 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # dev_map=() 00:26:24.929 
12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@274 -- # iptr 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-save 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-restore 00:26:24.929 00:26:24.929 real 0m12.939s 00:26:24.929 user 0m5.223s 00:26:24.929 sys 0m6.311s 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:26:24.929 ************************************ 00:26:24.929 END TEST nvmf_wait_for_buf 00:26:24.929 ************************************ 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@125 -- # xtrace_disable 00:26:24.929 12:09:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # pci_devs=() 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:33.072 
12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # net_devs=() 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # e810=() 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # local -ga e810 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # x722=() 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # local -ga x722 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # mlx=() 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # local -ga mlx 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:33.072 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.072 12:09:56 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:33.073 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:33.073 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:33.073 12:09:56 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:33.073 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:33.073 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:33.073 ************************************ 00:26:33.073 START TEST nvmf_perf_adq 00:26:33.073 ************************************ 00:26:33.073 12:09:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:33.073 * Looking for test storage... 00:26:33.073 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 
00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:26:33.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.073 --rc genhtml_branch_coverage=1 00:26:33.073 --rc genhtml_function_coverage=1 00:26:33.073 --rc genhtml_legend=1 00:26:33.073 --rc geninfo_all_blocks=1 00:26:33.073 --rc geninfo_unexecuted_blocks=1 00:26:33.073 00:26:33.073 ' 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:33.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.073 --rc genhtml_branch_coverage=1 00:26:33.073 --rc genhtml_function_coverage=1 00:26:33.073 --rc genhtml_legend=1 00:26:33.073 --rc geninfo_all_blocks=1 00:26:33.073 --rc geninfo_unexecuted_blocks=1 00:26:33.073 00:26:33.073 ' 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:33.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.073 --rc genhtml_branch_coverage=1 00:26:33.073 --rc genhtml_function_coverage=1 00:26:33.073 --rc genhtml_legend=1 00:26:33.073 --rc geninfo_all_blocks=1 00:26:33.073 --rc geninfo_unexecuted_blocks=1 00:26:33.073 00:26:33.073 ' 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:33.073 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:33.073 --rc genhtml_branch_coverage=1 00:26:33.073 --rc genhtml_function_coverage=1 00:26:33.073 --rc genhtml_legend=1 00:26:33.073 --rc geninfo_all_blocks=1 00:26:33.073 --rc geninfo_unexecuted_blocks=1 00:26:33.073 00:26:33.073 ' 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.073 
12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.073 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.074 12:09:57 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@50 -- # : 0 
00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:33.074 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:26:33.074 12:09:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # 
local -a pci_net_devs 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:39.740 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.740 12:10:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:39.740 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 
00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:39.740 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:39.740 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:26:39.740 12:10:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:26:41.125 12:10:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:26:43.039 12:10:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:26:48.334 12:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- 
# [[ e810 == e810 ]] 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:48.334 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:48.334 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:48.334 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:48.335 12:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:48.335 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.335 12:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:48.335 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@247 -- # create_target_ns 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:48.335 12:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:26:48.335 
12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:48.335 10.0.0.1 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:48.335 12:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:48.335 10.0.0.2 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@69 -- # [[ 
phy == veth ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:48.335 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:48.336 12:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:48.336 PING 10.0.0.1 (10.0.0.1) 
56(84) bytes of data. 00:26:48.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.633 ms 00:26:48.336 00:26:48.336 --- 10.0.0.1 ping statistics --- 00:26:48.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.336 rtt min/avg/max/mdev = 0.633/0.633/0.633/0.000 ms 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:48.336 12:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:48.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:26:48.336 00:26:48.336 --- 10.0.0.2 ping statistics --- 00:26:48.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.336 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:26:48.336 12:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:48.336 12:10:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator1 
00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 
-- # local dev=target0 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.336 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:48.337 12:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target1 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:26:48.337 ' 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:48.337 12:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=1415750 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 1415750 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1415750 ']' 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:48.337 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:48.337 [2024-12-05 12:10:13.133618] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:26:48.337 [2024-12-05 12:10:13.133686] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.337 [2024-12-05 12:10:13.234511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:48.337 [2024-12-05 12:10:13.289494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.337 [2024-12-05 12:10:13.289548] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.337 [2024-12-05 12:10:13.289561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.337 [2024-12-05 12:10:13.289568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.337 [2024-12-05 12:10:13.289574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:48.337 [2024-12-05 12:10:13.291598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.337 [2024-12-05 12:10:13.291801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:48.337 [2024-12-05 12:10:13.291963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:48.337 [2024-12-05 12:10:13.291964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.280 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:49.280 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:26:49.280 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:49.280 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:49.280 12:10:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:49.280 12:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:49.280 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.281 [2024-12-05 12:10:14.165012] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.281 Malloc1 00:26:49.281 12:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:49.281 [2024-12-05 12:10:14.238236] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1415990 00:26:49.281 12:10:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:26:49.281 12:10:14 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:51.825 12:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:26:51.825 12:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:51.825 12:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:51.825 12:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:51.825 12:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:26:51.825 "tick_rate": 2400000000, 00:26:51.825 "poll_groups": [ 00:26:51.825 { 00:26:51.825 "name": "nvmf_tgt_poll_group_000", 00:26:51.825 "admin_qpairs": 1, 00:26:51.825 "io_qpairs": 1, 00:26:51.825 "current_admin_qpairs": 1, 00:26:51.825 "current_io_qpairs": 1, 00:26:51.825 "pending_bdev_io": 0, 00:26:51.825 "completed_nvme_io": 15786, 00:26:51.825 "transports": [ 00:26:51.825 { 00:26:51.825 "trtype": "TCP" 00:26:51.825 } 00:26:51.825 ] 00:26:51.825 }, 00:26:51.825 { 00:26:51.825 "name": "nvmf_tgt_poll_group_001", 00:26:51.825 "admin_qpairs": 0, 00:26:51.825 "io_qpairs": 1, 00:26:51.825 "current_admin_qpairs": 0, 00:26:51.825 "current_io_qpairs": 1, 00:26:51.825 "pending_bdev_io": 0, 00:26:51.825 "completed_nvme_io": 17435, 00:26:51.825 "transports": [ 00:26:51.825 { 00:26:51.825 "trtype": "TCP" 00:26:51.825 } 00:26:51.825 ] 00:26:51.825 }, 00:26:51.825 { 00:26:51.825 "name": "nvmf_tgt_poll_group_002", 00:26:51.825 "admin_qpairs": 0, 00:26:51.825 "io_qpairs": 1, 00:26:51.825 "current_admin_qpairs": 0, 00:26:51.825 "current_io_qpairs": 1, 00:26:51.825 "pending_bdev_io": 0, 00:26:51.825 "completed_nvme_io": 17970, 00:26:51.825 
"transports": [ 00:26:51.825 { 00:26:51.825 "trtype": "TCP" 00:26:51.825 } 00:26:51.825 ] 00:26:51.825 }, 00:26:51.825 { 00:26:51.825 "name": "nvmf_tgt_poll_group_003", 00:26:51.825 "admin_qpairs": 0, 00:26:51.825 "io_qpairs": 1, 00:26:51.825 "current_admin_qpairs": 0, 00:26:51.825 "current_io_qpairs": 1, 00:26:51.825 "pending_bdev_io": 0, 00:26:51.825 "completed_nvme_io": 15937, 00:26:51.825 "transports": [ 00:26:51.825 { 00:26:51.825 "trtype": "TCP" 00:26:51.825 } 00:26:51.825 ] 00:26:51.825 } 00:26:51.825 ] 00:26:51.825 }' 00:26:51.825 12:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:51.825 12:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:26:51.825 12:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:26:51.825 12:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:26:51.826 12:10:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1415990 00:26:59.959 Initializing NVMe Controllers 00:26:59.959 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:59.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:59.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:59.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:59.959 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:59.959 Initialization complete. Launching workers. 
00:26:59.959 ======================================================== 00:26:59.959 Latency(us) 00:26:59.959 Device Information : IOPS MiB/s Average min max 00:26:59.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 12304.90 48.07 5201.83 1023.87 13194.60 00:26:59.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13239.40 51.72 4834.16 1244.33 13393.97 00:26:59.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13591.20 53.09 4708.72 954.82 12438.54 00:26:59.959 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12671.50 49.50 5051.26 1056.50 13205.39 00:26:59.959 ======================================================== 00:26:59.959 Total : 51806.99 202.37 4941.68 954.82 13393.97 00:26:59.959 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:59.959 rmmod nvme_tcp 00:26:59.959 rmmod nvme_fabrics 00:26:59.959 rmmod nvme_keyring 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:26:59.959 12:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 1415750 ']' 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 1415750 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1415750 ']' 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1415750 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1415750 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1415750' 00:26:59.959 killing process with pid 1415750 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1415750 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1415750 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@254 -- # local dev 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:59.959 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # return 0 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:27:01.869 12:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@274 -- # iptr 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-save 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-restore 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:01.869 12:10:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:03.786 12:10:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:05.178 12:10:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:10.461 12:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:10.461 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:10.461 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.461 12:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:10.461 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:10.461 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:10.461 
12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@247 -- # create_target_ns 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:10.461 12:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:27:10.461 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # add_to_ns 
cvl_0_1 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:10.462 10.0.0.1 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:10.462 12:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:10.462 10.0.0.2 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:27:10.462 12:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- 
# ping_ips 1 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 
00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:10.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:10.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.695 ms 00:27:10.462 00:27:10.462 --- 10.0.0.1 ping statistics --- 00:27:10.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.462 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target0 00:27:10.462 12:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:10.462 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:10.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:10.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:27:10.722 00:27:10.722 --- 10.0.0.2 ping statistics --- 00:27:10.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:10.722 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:10.722 12:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:27:10.722 12:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target0 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:10.722 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:10.723 12:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target1 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 
00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:10.723 ' 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec nvmf_ns_spdk ethtool --offload cvl_0_1 hw-tc-offload on 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec nvmf_ns_spdk ethtool --set-priv-flags cvl_0_1 channel-pkt-inspect-optimize off 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:10.723 net.core.busy_poll = 1 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:10.723 net.core.busy_read = 1 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:10.723 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 root mqprio num_tc 2 map 0 1 queues 
2@0 2@2 hw 1 mode channel 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 ingress 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc filter add dev cvl_0_1 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_1 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=1420505 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 1420505 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1420505 ']' 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.983 12:10:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:10.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.983 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:10.983 [2024-12-05 12:10:35.949050] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:27:10.983 [2024-12-05 12:10:35.949118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:11.244 [2024-12-05 12:10:36.049658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:11.244 [2024-12-05 12:10:36.102602] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:11.244 [2024-12-05 12:10:36.102652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:11.244 [2024-12-05 12:10:36.102661] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:11.244 [2024-12-05 12:10:36.102669] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:11.244 [2024-12-05 12:10:36.102674] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
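The `adq_configure_driver` steps traced above can be summarized as the following configuration sketch, reconstructed from the exact commands in this run. The interface name (`cvl_0_1`), network namespace (`nvmf_ns_spdk`), and target address (`10.0.0.2:4420`) are specific to this test environment and would differ elsewhere.

```shell
IFACE=cvl_0_1
NS="ip netns exec nvmf_ns_spdk"

# Enable hardware TC offload; disable the driver's packet-inspect optimization
$NS ethtool --offload "$IFACE" hw-tc-offload on
$NS ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off

# Busy polling: spin on sockets instead of sleeping, to pair app threads
# with NIC queues
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Two traffic classes: TC0 takes 2 queues at offset 0, TC1 takes 2 at offset 2
$NS tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 \
    queues 2@0 2@2 hw 1 mode channel
$NS tc qdisc add dev "$IFACE" ingress

# Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into hardware traffic class 1
$NS tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
    dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
```

These commands require root and ADQ-capable hardware (an Intel E810-class NIC in this log), so this is a privileged config fragment rather than something runnable standalone.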
00:27:11.244 [2024-12-05 12:10:36.104771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:11.244 [2024-12-05 12:10:36.104931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:11.244 [2024-12-05 12:10:36.105091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:11.244 [2024-12-05 12:10:36.105091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:11.815 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:11.815 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:11.816 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:11.816 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:11.816 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.816 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:11.816 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:27:11.816 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:11.816 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:11.816 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.816 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.816 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.816 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:11.816 12:10:36 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:11.816 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.816 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:11.816 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.076 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:12.076 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.076 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.076 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.076 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:12.076 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.076 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.076 [2024-12-05 12:10:36.962661] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:12.076 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.076 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:12.076 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.076 12:10:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.076 Malloc1 00:27:12.076 12:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.076 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:12.076 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.076 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.076 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.076 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:12.076 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.076 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.076 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.076 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:12.076 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.076 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:12.076 [2024-12-05 12:10:37.046483] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.076 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.076 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1420847 00:27:12.076 12:10:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:27:12.076 12:10:37 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:14.621 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:27:14.621 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.621 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:14.621 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.621 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:27:14.621 "tick_rate": 2400000000, 00:27:14.621 "poll_groups": [ 00:27:14.621 { 00:27:14.621 "name": "nvmf_tgt_poll_group_000", 00:27:14.621 "admin_qpairs": 1, 00:27:14.621 "io_qpairs": 0, 00:27:14.621 "current_admin_qpairs": 1, 00:27:14.621 "current_io_qpairs": 0, 00:27:14.621 "pending_bdev_io": 0, 00:27:14.621 "completed_nvme_io": 0, 00:27:14.621 "transports": [ 00:27:14.621 { 00:27:14.621 "trtype": "TCP" 00:27:14.621 } 00:27:14.621 ] 00:27:14.621 }, 00:27:14.621 { 00:27:14.621 "name": "nvmf_tgt_poll_group_001", 00:27:14.621 "admin_qpairs": 0, 00:27:14.621 "io_qpairs": 4, 00:27:14.621 "current_admin_qpairs": 0, 00:27:14.621 "current_io_qpairs": 4, 00:27:14.621 "pending_bdev_io": 0, 00:27:14.621 "completed_nvme_io": 34560, 00:27:14.621 "transports": [ 00:27:14.621 { 00:27:14.621 "trtype": "TCP" 00:27:14.621 } 00:27:14.621 ] 00:27:14.621 }, 00:27:14.621 { 00:27:14.621 "name": "nvmf_tgt_poll_group_002", 00:27:14.621 "admin_qpairs": 0, 00:27:14.621 "io_qpairs": 0, 00:27:14.621 "current_admin_qpairs": 0, 00:27:14.621 "current_io_qpairs": 0, 00:27:14.621 "pending_bdev_io": 0, 00:27:14.621 "completed_nvme_io": 0, 00:27:14.621 "transports": [ 
00:27:14.621 { 00:27:14.621 "trtype": "TCP" 00:27:14.621 } 00:27:14.621 ] 00:27:14.621 }, 00:27:14.621 { 00:27:14.621 "name": "nvmf_tgt_poll_group_003", 00:27:14.621 "admin_qpairs": 0, 00:27:14.621 "io_qpairs": 0, 00:27:14.621 "current_admin_qpairs": 0, 00:27:14.621 "current_io_qpairs": 0, 00:27:14.621 "pending_bdev_io": 0, 00:27:14.621 "completed_nvme_io": 0, 00:27:14.621 "transports": [ 00:27:14.621 { 00:27:14.621 "trtype": "TCP" 00:27:14.621 } 00:27:14.621 ] 00:27:14.621 } 00:27:14.621 ] 00:27:14.621 }' 00:27:14.621 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:14.621 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:27:14.621 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:27:14.621 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:27:14.621 12:10:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1420847 00:27:22.764 Initializing NVMe Controllers 00:27:22.764 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:22.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:22.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:22.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:22.764 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:22.764 Initialization complete. Launching workers. 
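The check at `perf_adq.sh@107-109` above verifies that ADQ steering worked: all I/O qpairs should land on a single poll group, leaving the other groups idle. The script does this with `jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l`; the dependency-free sketch below shows the same idea with `grep -c`, on the assumption (true for this rpc output) that each stats key sits on its own line. The sample JSON is abbreviated from the stats dump in this log.

```shell
# Count poll groups with no active I/O qpairs. Expect 3 of 4 idle when
# all qpairs were steered to one group, as in the run above.
stats='{ "poll_groups": [
  { "name": "nvmf_tgt_poll_group_000", "current_io_qpairs": 0 },
  { "name": "nvmf_tgt_poll_group_001", "current_io_qpairs": 4 },
  { "name": "nvmf_tgt_poll_group_002", "current_io_qpairs": 0 },
  { "name": "nvmf_tgt_poll_group_003", "current_io_qpairs": 0 } ] }'
idle=$(printf '%s\n' "$stats" | grep -c '"current_io_qpairs": 0')
echo "$idle"
```

The script then asserts the idle count is not below `expected_idle` (here `[[ 3 -lt 2 ]]` is false, so the check passes).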
00:27:22.764 ======================================================== 00:27:22.764 Latency(us) 00:27:22.764 Device Information : IOPS MiB/s Average min max 00:27:22.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6280.50 24.53 10200.03 1062.06 58297.38 00:27:22.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5888.50 23.00 10895.04 1383.33 60186.35 00:27:22.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5915.20 23.11 10852.78 1384.27 57651.26 00:27:22.764 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6132.10 23.95 10456.20 988.13 56923.81 00:27:22.764 ======================================================== 00:27:22.764 Total : 24216.30 94.59 10593.34 988.13 60186.35 00:27:22.764 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:22.764 rmmod nvme_tcp 00:27:22.764 rmmod nvme_fabrics 00:27:22.764 rmmod nvme_keyring 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:27:22.764 12:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 1420505 ']' 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 1420505 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1420505 ']' 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1420505 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1420505 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1420505' 00:27:22.764 killing process with pid 1420505 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1420505 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1420505 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@254 -- # local dev 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:22.764 12:10:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # return 0 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:27:26.068 12:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@274 -- # iptr 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-save 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-restore 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:27:26.068 00:27:26.068 real 0m53.603s 00:27:26.068 user 2m49.837s 00:27:26.068 sys 0m11.810s 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.068 ************************************ 00:27:26.068 END TEST nvmf_perf_adq 00:27:26.068 ************************************ 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:26.068 ************************************ 00:27:26.068 START TEST nvmf_shutdown 00:27:26.068 ************************************ 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:26.068 * Looking for test storage... 00:27:26.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:26.068 12:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:26.068 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] 
> ver2[v] )) 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:26.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.069 --rc genhtml_branch_coverage=1 00:27:26.069 --rc genhtml_function_coverage=1 00:27:26.069 --rc genhtml_legend=1 00:27:26.069 --rc geninfo_all_blocks=1 00:27:26.069 --rc geninfo_unexecuted_blocks=1 00:27:26.069 00:27:26.069 ' 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:26.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.069 --rc genhtml_branch_coverage=1 00:27:26.069 --rc genhtml_function_coverage=1 00:27:26.069 --rc genhtml_legend=1 00:27:26.069 --rc geninfo_all_blocks=1 00:27:26.069 --rc geninfo_unexecuted_blocks=1 00:27:26.069 00:27:26.069 ' 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:26.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.069 --rc genhtml_branch_coverage=1 00:27:26.069 --rc genhtml_function_coverage=1 00:27:26.069 --rc genhtml_legend=1 00:27:26.069 --rc geninfo_all_blocks=1 00:27:26.069 --rc geninfo_unexecuted_blocks=1 00:27:26.069 00:27:26.069 ' 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:26.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:26.069 --rc genhtml_branch_coverage=1 00:27:26.069 --rc genhtml_function_coverage=1 00:27:26.069 --rc genhtml_legend=1 
00:27:26.069 --rc geninfo_all_blocks=1 00:27:26.069 --rc geninfo_unexecuted_blocks=1 00:27:26.069 00:27:26.069 ' 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:26.069 12:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@50 -- # : 0 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:26.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:26.069 ************************************ 00:27:26.069 START TEST nvmf_shutdown_tc1 00:27:26.069 ************************************ 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:26.069 12:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # remove_target_ns 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:26.069 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # xtrace_disable 00:27:26.070 12:10:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # pci_devs=() 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:34.218 12:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # net_devs=() 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # e810=() 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # local -ga e810 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # x722=() 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # local -ga x722 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # mlx=() 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # local -ga mlx 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:34.218 12:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:34.218 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:34.219 Found 0000:4b:00.0 (0x8086 - 0x159b) 
00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:34.219 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:34.219 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:34.219 12:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:34.219 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # is_hw=yes 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@247 -- # create_target_ns 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@135 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@28 -- # local -g _dev 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # 
(( _dev = _dev, max = _dev )) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # ips=() 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:27:34.219 12:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772161 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:34.219 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:34.220 10.0.0.1 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772162 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:34.220 10.0.0.2 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:27:34.220 
12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:34.220 12:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/setup.sh@98 -- # local dev=initiator0 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:34.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:34.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.613 ms 00:27:34.220 00:27:34.220 --- 10.0.0.1 ping statistics --- 00:27:34.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.220 rtt min/avg/max/mdev = 0.613/0.613/0.613/0.000 ms 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=target0 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:34.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:34.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:27:34.220 00:27:34.220 --- 10.0.0.2 ping statistics --- 00:27:34.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:34.220 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # return 0 00:27:34.220 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:34.221 12:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:34.221 12:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # return 1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev= 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@160 -- # return 0 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:34.221 12:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=target0 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=target1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # return 1 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev= 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@160 -- # return 0 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:34.221 ' 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:34.221 12:10:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # nvmfpid=1427331 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # waitforlisten 1427331 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1427331 ']' 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:27:34.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:34.221 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.221 [2024-12-05 12:10:58.739615] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:27:34.221 [2024-12-05 12:10:58.739684] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.221 [2024-12-05 12:10:58.841557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:34.221 [2024-12-05 12:10:58.893321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.221 [2024-12-05 12:10:58.893377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.221 [2024-12-05 12:10:58.893386] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.221 [2024-12-05 12:10:58.893394] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.221 [2024-12-05 12:10:58.893400] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:34.221 [2024-12-05 12:10:58.895434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:34.221 [2024-12-05 12:10:58.895594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:34.221 [2024-12-05 12:10:58.895843] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:34.221 [2024-12-05 12:10:58.895847] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.804 [2024-12-05 12:10:59.619224] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:34.804 12:10:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.804 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.805 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.805 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.805 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.805 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.805 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.805 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.805 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:34.805 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:27:34.805 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:34.805 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:34.805 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:34.805 Malloc1 00:27:34.805 [2024-12-05 12:10:59.752441] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:34.805 Malloc2 00:27:34.805 Malloc3 00:27:35.064 Malloc4 00:27:35.064 Malloc5 00:27:35.064 Malloc6 00:27:35.064 Malloc7 00:27:35.064 Malloc8 00:27:35.064 Malloc9 
00:27:35.323 Malloc10 00:27:35.323 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:35.323 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:35.323 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:35.323 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.323 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1427709 00:27:35.323 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1427709 /var/tmp/bdevperf.sock 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1427709 ']' 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:35.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:35.324 { 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme$subsystem", 00:27:35.324 "trtype": "$TEST_TRANSPORT", 00:27:35.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "$NVMF_PORT", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.324 "hdgst": ${hdgst:-false}, 00:27:35.324 "ddgst": ${ddgst:-false} 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 } 00:27:35.324 EOF 00:27:35.324 )") 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:35.324 12:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:35.324 { 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme$subsystem", 00:27:35.324 "trtype": "$TEST_TRANSPORT", 00:27:35.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "$NVMF_PORT", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.324 "hdgst": ${hdgst:-false}, 00:27:35.324 "ddgst": ${ddgst:-false} 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 } 00:27:35.324 EOF 00:27:35.324 )") 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:35.324 { 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme$subsystem", 00:27:35.324 "trtype": "$TEST_TRANSPORT", 00:27:35.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "$NVMF_PORT", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.324 "hdgst": ${hdgst:-false}, 00:27:35.324 "ddgst": ${ddgst:-false} 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 } 00:27:35.324 EOF 00:27:35.324 )") 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:35.324 { 
00:27:35.324 "params": { 00:27:35.324 "name": "Nvme$subsystem", 00:27:35.324 "trtype": "$TEST_TRANSPORT", 00:27:35.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "$NVMF_PORT", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.324 "hdgst": ${hdgst:-false}, 00:27:35.324 "ddgst": ${ddgst:-false} 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 } 00:27:35.324 EOF 00:27:35.324 )") 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:35.324 { 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme$subsystem", 00:27:35.324 "trtype": "$TEST_TRANSPORT", 00:27:35.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "$NVMF_PORT", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.324 "hdgst": ${hdgst:-false}, 00:27:35.324 "ddgst": ${ddgst:-false} 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 } 00:27:35.324 EOF 00:27:35.324 )") 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:35.324 { 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme$subsystem", 00:27:35.324 "trtype": "$TEST_TRANSPORT", 00:27:35.324 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "$NVMF_PORT", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.324 "hdgst": ${hdgst:-false}, 00:27:35.324 "ddgst": ${ddgst:-false} 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 } 00:27:35.324 EOF 00:27:35.324 )") 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:35.324 [2024-12-05 12:11:00.276483] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:27:35.324 [2024-12-05 12:11:00.276556] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:35.324 { 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme$subsystem", 00:27:35.324 "trtype": "$TEST_TRANSPORT", 00:27:35.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "$NVMF_PORT", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.324 "hdgst": ${hdgst:-false}, 00:27:35.324 "ddgst": ${ddgst:-false} 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 } 00:27:35.324 EOF 00:27:35.324 )") 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 
00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:35.324 { 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme$subsystem", 00:27:35.324 "trtype": "$TEST_TRANSPORT", 00:27:35.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "$NVMF_PORT", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.324 "hdgst": ${hdgst:-false}, 00:27:35.324 "ddgst": ${ddgst:-false} 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 } 00:27:35.324 EOF 00:27:35.324 )") 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:35.324 { 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme$subsystem", 00:27:35.324 "trtype": "$TEST_TRANSPORT", 00:27:35.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "$NVMF_PORT", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.324 "hdgst": ${hdgst:-false}, 00:27:35.324 "ddgst": ${ddgst:-false} 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 } 00:27:35.324 EOF 00:27:35.324 )") 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat 
<<-EOF 00:27:35.324 { 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme$subsystem", 00:27:35.324 "trtype": "$TEST_TRANSPORT", 00:27:35.324 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "$NVMF_PORT", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:35.324 "hdgst": ${hdgst:-false}, 00:27:35.324 "ddgst": ${ddgst:-false} 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 } 00:27:35.324 EOF 00:27:35.324 )") 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:27:35.324 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme1", 00:27:35.324 "trtype": "tcp", 00:27:35.324 "traddr": "10.0.0.2", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "4420", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:35.324 "hdgst": false, 00:27:35.324 "ddgst": false 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 },{ 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme2", 00:27:35.324 "trtype": "tcp", 00:27:35.324 "traddr": "10.0.0.2", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "4420", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:35.324 "hdgst": false, 00:27:35.324 "ddgst": false 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 },{ 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme3", 00:27:35.324 "trtype": "tcp", 00:27:35.324 "traddr": 
"10.0.0.2", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "4420", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:35.324 "hdgst": false, 00:27:35.324 "ddgst": false 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 },{ 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme4", 00:27:35.324 "trtype": "tcp", 00:27:35.324 "traddr": "10.0.0.2", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "4420", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:35.324 "hdgst": false, 00:27:35.324 "ddgst": false 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 },{ 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme5", 00:27:35.324 "trtype": "tcp", 00:27:35.324 "traddr": "10.0.0.2", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "4420", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:35.324 "hdgst": false, 00:27:35.324 "ddgst": false 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 },{ 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme6", 00:27:35.324 "trtype": "tcp", 00:27:35.324 "traddr": "10.0.0.2", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "4420", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:35.324 "hdgst": false, 00:27:35.324 "ddgst": false 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 },{ 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme7", 00:27:35.324 "trtype": "tcp", 00:27:35.324 "traddr": "10.0.0.2", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "4420", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:35.324 "hdgst": false, 00:27:35.324 "ddgst": false 00:27:35.324 }, 00:27:35.324 
"method": "bdev_nvme_attach_controller" 00:27:35.324 },{ 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme8", 00:27:35.324 "trtype": "tcp", 00:27:35.324 "traddr": "10.0.0.2", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "4420", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:35.324 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:35.324 "hdgst": false, 00:27:35.324 "ddgst": false 00:27:35.324 }, 00:27:35.324 "method": "bdev_nvme_attach_controller" 00:27:35.324 },{ 00:27:35.324 "params": { 00:27:35.324 "name": "Nvme9", 00:27:35.324 "trtype": "tcp", 00:27:35.324 "traddr": "10.0.0.2", 00:27:35.324 "adrfam": "ipv4", 00:27:35.324 "trsvcid": "4420", 00:27:35.324 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:35.325 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:35.325 "hdgst": false, 00:27:35.325 "ddgst": false 00:27:35.325 }, 00:27:35.325 "method": "bdev_nvme_attach_controller" 00:27:35.325 },{ 00:27:35.325 "params": { 00:27:35.325 "name": "Nvme10", 00:27:35.325 "trtype": "tcp", 00:27:35.325 "traddr": "10.0.0.2", 00:27:35.325 "adrfam": "ipv4", 00:27:35.325 "trsvcid": "4420", 00:27:35.325 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:35.325 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:35.325 "hdgst": false, 00:27:35.325 "ddgst": false 00:27:35.325 }, 00:27:35.325 "method": "bdev_nvme_attach_controller" 00:27:35.325 }' 00:27:35.325 [2024-12-05 12:11:00.372389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.583 [2024-12-05 12:11:00.425153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.962 12:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:36.962 12:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:27:36.962 12:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 
00:27:36.962 12:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:36.962 12:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:36.962 12:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:36.962 12:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1427709 00:27:36.962 12:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:27:36.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1427709 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:36.962 12:11:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:27:37.902 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1427331 00:27:37.902 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:37.903 { 00:27:37.903 "params": { 00:27:37.903 "name": "Nvme$subsystem", 00:27:37.903 "trtype": "$TEST_TRANSPORT", 00:27:37.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.903 "adrfam": "ipv4", 00:27:37.903 "trsvcid": "$NVMF_PORT", 00:27:37.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.903 "hdgst": ${hdgst:-false}, 00:27:37.903 "ddgst": ${ddgst:-false} 00:27:37.903 }, 00:27:37.903 "method": "bdev_nvme_attach_controller" 00:27:37.903 } 00:27:37.903 EOF 00:27:37.903 )") 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:37.903 { 00:27:37.903 "params": { 00:27:37.903 "name": "Nvme$subsystem", 00:27:37.903 "trtype": "$TEST_TRANSPORT", 00:27:37.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.903 "adrfam": "ipv4", 00:27:37.903 "trsvcid": "$NVMF_PORT", 00:27:37.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.903 "hdgst": ${hdgst:-false}, 00:27:37.903 "ddgst": ${ddgst:-false} 00:27:37.903 }, 00:27:37.903 "method": "bdev_nvme_attach_controller" 00:27:37.903 } 00:27:37.903 EOF 00:27:37.903 )") 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:37.903 { 00:27:37.903 "params": { 00:27:37.903 "name": "Nvme$subsystem", 
00:27:37.903 "trtype": "$TEST_TRANSPORT", 00:27:37.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.903 "adrfam": "ipv4", 00:27:37.903 "trsvcid": "$NVMF_PORT", 00:27:37.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.903 "hdgst": ${hdgst:-false}, 00:27:37.903 "ddgst": ${ddgst:-false} 00:27:37.903 }, 00:27:37.903 "method": "bdev_nvme_attach_controller" 00:27:37.903 } 00:27:37.903 EOF 00:27:37.903 )") 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:37.903 { 00:27:37.903 "params": { 00:27:37.903 "name": "Nvme$subsystem", 00:27:37.903 "trtype": "$TEST_TRANSPORT", 00:27:37.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.903 "adrfam": "ipv4", 00:27:37.903 "trsvcid": "$NVMF_PORT", 00:27:37.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.903 "hdgst": ${hdgst:-false}, 00:27:37.903 "ddgst": ${ddgst:-false} 00:27:37.903 }, 00:27:37.903 "method": "bdev_nvme_attach_controller" 00:27:37.903 } 00:27:37.903 EOF 00:27:37.903 )") 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:37.903 { 00:27:37.903 "params": { 00:27:37.903 "name": "Nvme$subsystem", 00:27:37.903 "trtype": "$TEST_TRANSPORT", 00:27:37.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.903 "adrfam": "ipv4", 
00:27:37.903 "trsvcid": "$NVMF_PORT", 00:27:37.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.903 "hdgst": ${hdgst:-false}, 00:27:37.903 "ddgst": ${ddgst:-false} 00:27:37.903 }, 00:27:37.903 "method": "bdev_nvme_attach_controller" 00:27:37.903 } 00:27:37.903 EOF 00:27:37.903 )") 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:37.903 { 00:27:37.903 "params": { 00:27:37.903 "name": "Nvme$subsystem", 00:27:37.903 "trtype": "$TEST_TRANSPORT", 00:27:37.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.903 "adrfam": "ipv4", 00:27:37.903 "trsvcid": "$NVMF_PORT", 00:27:37.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.903 "hdgst": ${hdgst:-false}, 00:27:37.903 "ddgst": ${ddgst:-false} 00:27:37.903 }, 00:27:37.903 "method": "bdev_nvme_attach_controller" 00:27:37.903 } 00:27:37.903 EOF 00:27:37.903 )") 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:37.903 [2024-12-05 12:11:02.739101] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:27:37.903 [2024-12-05 12:11:02.739152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1428159 ] 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:37.903 { 00:27:37.903 "params": { 00:27:37.903 "name": "Nvme$subsystem", 00:27:37.903 "trtype": "$TEST_TRANSPORT", 00:27:37.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.903 "adrfam": "ipv4", 00:27:37.903 "trsvcid": "$NVMF_PORT", 00:27:37.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.903 "hdgst": ${hdgst:-false}, 00:27:37.903 "ddgst": ${ddgst:-false} 00:27:37.903 }, 00:27:37.903 "method": "bdev_nvme_attach_controller" 00:27:37.903 } 00:27:37.903 EOF 00:27:37.903 )") 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:37.903 { 00:27:37.903 "params": { 00:27:37.903 "name": "Nvme$subsystem", 00:27:37.903 "trtype": "$TEST_TRANSPORT", 00:27:37.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.903 "adrfam": "ipv4", 00:27:37.903 "trsvcid": "$NVMF_PORT", 00:27:37.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.903 "hdgst": ${hdgst:-false}, 00:27:37.903 "ddgst": ${ddgst:-false} 00:27:37.903 }, 00:27:37.903 "method": 
"bdev_nvme_attach_controller" 00:27:37.903 } 00:27:37.903 EOF 00:27:37.903 )") 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:37.903 { 00:27:37.903 "params": { 00:27:37.903 "name": "Nvme$subsystem", 00:27:37.903 "trtype": "$TEST_TRANSPORT", 00:27:37.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.903 "adrfam": "ipv4", 00:27:37.903 "trsvcid": "$NVMF_PORT", 00:27:37.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.903 "hdgst": ${hdgst:-false}, 00:27:37.903 "ddgst": ${ddgst:-false} 00:27:37.903 }, 00:27:37.903 "method": "bdev_nvme_attach_controller" 00:27:37.903 } 00:27:37.903 EOF 00:27:37.903 )") 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:37.903 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:37.903 { 00:27:37.903 "params": { 00:27:37.903 "name": "Nvme$subsystem", 00:27:37.903 "trtype": "$TEST_TRANSPORT", 00:27:37.903 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:37.903 "adrfam": "ipv4", 00:27:37.903 "trsvcid": "$NVMF_PORT", 00:27:37.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:37.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:37.903 "hdgst": ${hdgst:-false}, 00:27:37.903 "ddgst": ${ddgst:-false} 00:27:37.903 }, 00:27:37.904 "method": "bdev_nvme_attach_controller" 00:27:37.904 } 00:27:37.904 EOF 00:27:37.904 )") 00:27:37.904 12:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:27:37.904 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 00:27:37.904 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:27:37.904 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:27:37.904 "params": { 00:27:37.904 "name": "Nvme1", 00:27:37.904 "trtype": "tcp", 00:27:37.904 "traddr": "10.0.0.2", 00:27:37.904 "adrfam": "ipv4", 00:27:37.904 "trsvcid": "4420", 00:27:37.904 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:37.904 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:37.904 "hdgst": false, 00:27:37.904 "ddgst": false 00:27:37.904 }, 00:27:37.904 "method": "bdev_nvme_attach_controller" 00:27:37.904 },{ 00:27:37.904 "params": { 00:27:37.904 "name": "Nvme2", 00:27:37.904 "trtype": "tcp", 00:27:37.904 "traddr": "10.0.0.2", 00:27:37.904 "adrfam": "ipv4", 00:27:37.904 "trsvcid": "4420", 00:27:37.904 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:37.904 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:37.904 "hdgst": false, 00:27:37.904 "ddgst": false 00:27:37.904 }, 00:27:37.904 "method": "bdev_nvme_attach_controller" 00:27:37.904 },{ 00:27:37.904 "params": { 00:27:37.904 "name": "Nvme3", 00:27:37.904 "trtype": "tcp", 00:27:37.904 "traddr": "10.0.0.2", 00:27:37.904 "adrfam": "ipv4", 00:27:37.904 "trsvcid": "4420", 00:27:37.904 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:37.904 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:37.904 "hdgst": false, 00:27:37.904 "ddgst": false 00:27:37.904 }, 00:27:37.904 "method": "bdev_nvme_attach_controller" 00:27:37.904 },{ 00:27:37.904 "params": { 00:27:37.904 "name": "Nvme4", 00:27:37.904 "trtype": "tcp", 00:27:37.904 "traddr": "10.0.0.2", 00:27:37.904 "adrfam": "ipv4", 00:27:37.904 "trsvcid": "4420", 00:27:37.904 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:37.904 "hostnqn": 
"nqn.2016-06.io.spdk:host4", 00:27:37.904 "hdgst": false, 00:27:37.904 "ddgst": false 00:27:37.904 }, 00:27:37.904 "method": "bdev_nvme_attach_controller" 00:27:37.904 },{ 00:27:37.904 "params": { 00:27:37.904 "name": "Nvme5", 00:27:37.904 "trtype": "tcp", 00:27:37.904 "traddr": "10.0.0.2", 00:27:37.904 "adrfam": "ipv4", 00:27:37.904 "trsvcid": "4420", 00:27:37.904 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:37.904 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:37.904 "hdgst": false, 00:27:37.904 "ddgst": false 00:27:37.904 }, 00:27:37.904 "method": "bdev_nvme_attach_controller" 00:27:37.904 },{ 00:27:37.904 "params": { 00:27:37.904 "name": "Nvme6", 00:27:37.904 "trtype": "tcp", 00:27:37.904 "traddr": "10.0.0.2", 00:27:37.904 "adrfam": "ipv4", 00:27:37.904 "trsvcid": "4420", 00:27:37.904 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:37.904 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:37.904 "hdgst": false, 00:27:37.904 "ddgst": false 00:27:37.904 }, 00:27:37.904 "method": "bdev_nvme_attach_controller" 00:27:37.904 },{ 00:27:37.904 "params": { 00:27:37.904 "name": "Nvme7", 00:27:37.904 "trtype": "tcp", 00:27:37.904 "traddr": "10.0.0.2", 00:27:37.904 "adrfam": "ipv4", 00:27:37.904 "trsvcid": "4420", 00:27:37.904 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:37.904 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:37.904 "hdgst": false, 00:27:37.904 "ddgst": false 00:27:37.904 }, 00:27:37.904 "method": "bdev_nvme_attach_controller" 00:27:37.904 },{ 00:27:37.904 "params": { 00:27:37.904 "name": "Nvme8", 00:27:37.904 "trtype": "tcp", 00:27:37.904 "traddr": "10.0.0.2", 00:27:37.904 "adrfam": "ipv4", 00:27:37.904 "trsvcid": "4420", 00:27:37.904 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:37.904 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:37.904 "hdgst": false, 00:27:37.904 "ddgst": false 00:27:37.904 }, 00:27:37.904 "method": "bdev_nvme_attach_controller" 00:27:37.904 },{ 00:27:37.904 "params": { 00:27:37.904 "name": "Nvme9", 00:27:37.904 "trtype": "tcp", 00:27:37.904 
"traddr": "10.0.0.2", 00:27:37.904 "adrfam": "ipv4", 00:27:37.904 "trsvcid": "4420", 00:27:37.904 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:37.904 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:37.904 "hdgst": false, 00:27:37.904 "ddgst": false 00:27:37.904 }, 00:27:37.904 "method": "bdev_nvme_attach_controller" 00:27:37.904 },{ 00:27:37.904 "params": { 00:27:37.904 "name": "Nvme10", 00:27:37.904 "trtype": "tcp", 00:27:37.904 "traddr": "10.0.0.2", 00:27:37.904 "adrfam": "ipv4", 00:27:37.904 "trsvcid": "4420", 00:27:37.904 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:37.904 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:37.904 "hdgst": false, 00:27:37.904 "ddgst": false 00:27:37.904 }, 00:27:37.904 "method": "bdev_nvme_attach_controller" 00:27:37.904 }' 00:27:37.904 [2024-12-05 12:11:02.825972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.904 [2024-12-05 12:11:02.862647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.286 Running I/O for 1 seconds... 
00:27:40.483 1797.00 IOPS, 112.31 MiB/s 00:27:40.483 Latency(us) 00:27:40.483 [2024-12-05T11:11:05.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:40.483 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.483 Verification LBA range: start 0x0 length 0x400 00:27:40.483 Nvme1n1 : 1.17 219.60 13.73 0.00 0.00 288544.64 16056.32 256901.12 00:27:40.483 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.483 Verification LBA range: start 0x0 length 0x400 00:27:40.483 Nvme2n1 : 1.07 238.41 14.90 0.00 0.00 260848.43 17803.95 249910.61 00:27:40.483 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.483 Verification LBA range: start 0x0 length 0x400 00:27:40.483 Nvme3n1 : 1.16 221.60 13.85 0.00 0.00 275406.08 32112.64 228939.09 00:27:40.483 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.483 Verification LBA range: start 0x0 length 0x400 00:27:40.483 Nvme4n1 : 1.16 220.54 13.78 0.00 0.00 272798.93 24139.09 234181.97 00:27:40.483 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.483 Verification LBA range: start 0x0 length 0x400 00:27:40.483 Nvme5n1 : 1.20 265.98 16.62 0.00 0.00 222691.33 17694.72 244667.73 00:27:40.483 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.483 Verification LBA range: start 0x0 length 0x400 00:27:40.484 Nvme6n1 : 1.20 267.08 16.69 0.00 0.00 217827.33 15510.19 249910.61 00:27:40.484 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.484 Verification LBA range: start 0x0 length 0x400 00:27:40.484 Nvme7n1 : 1.13 227.19 14.20 0.00 0.00 249754.03 26651.31 251658.24 00:27:40.484 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.484 Verification LBA range: start 0x0 length 0x400 00:27:40.484 Nvme8n1 : 1.21 265.32 16.58 0.00 0.00 211653.03 13107.20 260396.37 
00:27:40.484 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.484 Verification LBA range: start 0x0 length 0x400 00:27:40.484 Nvme9n1 : 1.19 214.98 13.44 0.00 0.00 255415.89 17913.17 272629.76 00:27:40.484 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:40.484 Verification LBA range: start 0x0 length 0x400 00:27:40.484 Nvme10n1 : 1.22 262.47 16.40 0.00 0.00 206666.33 11741.87 265639.25 00:27:40.484 [2024-12-05T11:11:05.533Z] =================================================================================================================== 00:27:40.484 [2024-12-05T11:11:05.533Z] Total : 2403.19 150.20 0.00 0.00 243301.41 11741.87 272629.76 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@99 -- # sync 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # set +e 00:27:40.484 12:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:40.484 rmmod nvme_tcp 00:27:40.484 rmmod nvme_fabrics 00:27:40.484 rmmod nvme_keyring 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # set -e 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # return 0 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # '[' -n 1427331 ']' 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@337 -- # killprocess 1427331 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1427331 ']' 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 1427331 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:40.484 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1427331 00:27:40.744 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:40.744 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:40.744 12:11:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1427331' 00:27:40.744 killing process with pid 1427331 00:27:40.744 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1427331 00:27:40.744 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1427331 00:27:40.744 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:40.744 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # nvmf_fini 00:27:40.744 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@254 -- # local dev 00:27:40.744 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:40.744 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:40.744 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:40.744 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:43.290 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:43.290 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:43.290 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@121 -- # return 0 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@261 
-- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 
00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # _dev=0 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # dev_map=() 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@274 -- # iptr 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@548 -- # iptables-save 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@548 -- # iptables-restore 00:27:43.291 00:27:43.291 real 0m16.923s 00:27:43.291 user 0m33.516s 00:27:43.291 sys 0m7.053s 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:43.291 ************************************ 00:27:43.291 END TEST nvmf_shutdown_tc1 00:27:43.291 ************************************ 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:43.291 ************************************ 00:27:43.291 START TEST nvmf_shutdown_tc2 00:27:43.291 ************************************ 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:27:43.291 12:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # remove_target_ns 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # xtrace_disable 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.291 
12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # pci_devs=() 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # net_devs=() 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 -- # e810=() 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 -- # local -ga e810 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # x722=() 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # local -ga x722 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # mlx=() 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # local -ga mlx 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:43.291 12:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:43.291 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:43.291 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:43.291 12:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:43.291 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:43.292 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:43.292 12:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:43.292 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # is_hw=yes 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:43.292 12:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@247 -- # create_target_ns 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:43.292 12:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@28 -- # local -g _dev 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:43.292 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # ips=() 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@52 -- # [[ phy 
== phy ]] 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772161 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:43.292 12:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:43.292 10.0.0.1 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772162 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 
00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:43.292 10.0.0.2 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:43.292 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # eval 'ip netns 
exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair < 
pairs )) 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:43.293 12:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:43.293 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:43.293 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:43.293 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.604 ms 00:27:43.293 00:27:43.293 --- 10.0.0.1 ping statistics --- 00:27:43.293 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.293 rtt min/avg/max/mdev = 0.604/0.604/0.604/0.000 ms 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/setup.sh@159 -- # get_net_dev target0 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=target0 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # ping -c 1 
10.0.0.2 00:27:43.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:43.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:27:43.555 00:27:43.555 --- 10.0.0.2 ping statistics --- 00:27:43.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:43.555 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # return 0 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 
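The `ping_ip`/`eval` traces above dispatch a command either into a network namespace or into the default one, depending on whether a command-prefix array name was passed. A minimal sketch of that pattern (function and variable names here are illustrative, not the exact `nvmf/setup.sh` source):

```shell
#!/usr/bin/env bash
# Sketch: optional netns prefix passed by *name* and expanded via a
# bash nameref, mirroring the "local -n ns=NVMF_TARGET_NS_CMD" lines
# in the trace above.
ping_ip() {
    local ip=$1 in_ns=${2:-} count=${3:-1}
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns   # nameref to the caller's prefix array
    else
        local ns=()          # empty prefix: run in the default netns
    fi
    # Expands to e.g.: ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
    "${ns[@]}" ping -c "$count" "$ip"
}
```

In the log, the prefix array is `NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)`, so the first ping runs inside the target namespace while the second (empty prefix) pings from the initiator side.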
-- # get_net_dev initiator0 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:43.555 12:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # return 1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev= 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@160 -- # return 0 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:43.555 12:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=target0 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local 
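The `get_ip_address` traces above resolve a logical device name (`target0`, `initiator0`) to a physical interface (`cvl_0_1`, `cvl_0_0`) and then read the IP from the interface's `ifalias` sysfs file, which the test setup uses to stash the assigned address. A hedged sketch of that lookup step (the sysfs-path parameter is added here for illustration only):

```shell
#!/usr/bin/env bash
# Sketch: read the IP stored in /sys/class/net/<dev>/ifalias, as the
# "cat /sys/class/net/cvl_0_1/ifalias" lines in the trace do. Returns
# nonzero when the device or alias is absent, matching the empty
# NVMF_SECOND_TARGET_IP= result for the missing target1 device.
get_ip_from_ifalias() {
    local dev=$1 sysfs=${2:-/sys/class/net}
    local ip
    ip=$(cat "$sysfs/$dev/ifalias" 2>/dev/null) || return 1
    [[ -n $ip ]] && echo "$ip"
}
```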
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=target1 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:43.555 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # return 1 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev= 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@160 -- # return 0 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:43.556 ' 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:43.556 12:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # nvmfpid=1429524 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # waitforlisten 1429524 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1429524 ']' 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:43.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:43.556 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:43.556 [2024-12-05 12:11:08.535752] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:27:43.556 [2024-12-05 12:11:08.535818] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:43.816 [2024-12-05 12:11:08.633076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:43.816 [2024-12-05 12:11:08.673333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:43.816 [2024-12-05 12:11:08.673371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:43.816 [2024-12-05 12:11:08.673381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:43.816 [2024-12-05 12:11:08.673386] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:43.816 [2024-12-05 12:11:08.673390] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:43.816 [2024-12-05 12:11:08.674931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:43.816 [2024-12-05 12:11:08.675089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:43.816 [2024-12-05 12:11:08.675246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.816 [2024-12-05 12:11:08.675248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.409 [2024-12-05 12:11:09.396952] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.409 12:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.409 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:44.724 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.724 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:44.724 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.724 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:44.724 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.724 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:44.724 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:44.724 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:27:44.724 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:44.724 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.724 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:44.724 Malloc1 00:27:44.725 [2024-12-05 12:11:09.504186] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.725 Malloc2 00:27:44.725 Malloc3 00:27:44.725 Malloc4 00:27:44.725 Malloc5 00:27:44.725 Malloc6 00:27:44.725 Malloc7 00:27:44.725 Malloc8 00:27:45.051 Malloc9 
00:27:45.051 Malloc10 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1429769 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1429769 /var/tmp/bdevperf.sock 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1429769 ']' 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:45.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # config=() 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # local subsystem config 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:45.051 { 00:27:45.051 "params": { 00:27:45.051 "name": "Nvme$subsystem", 00:27:45.051 "trtype": "$TEST_TRANSPORT", 00:27:45.051 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.051 "adrfam": "ipv4", 00:27:45.051 "trsvcid": "$NVMF_PORT", 00:27:45.051 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.051 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.051 "hdgst": ${hdgst:-false}, 00:27:45.051 "ddgst": ${ddgst:-false} 00:27:45.051 }, 00:27:45.051 "method": "bdev_nvme_attach_controller" 00:27:45.051 } 00:27:45.051 EOF 00:27:45.051 )") 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 
00:27:45.051 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:45.051 { 00:27:45.052 "params": { 00:27:45.052 "name": "Nvme$subsystem", 00:27:45.052 "trtype": "$TEST_TRANSPORT", 00:27:45.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.052 "adrfam": "ipv4", 00:27:45.052 "trsvcid": "$NVMF_PORT", 00:27:45.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.052 "hdgst": ${hdgst:-false}, 00:27:45.052 "ddgst": ${ddgst:-false} 00:27:45.052 }, 00:27:45.052 "method": "bdev_nvme_attach_controller" 00:27:45.052 } 00:27:45.052 EOF 00:27:45.052 )") 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:45.052 { 00:27:45.052 "params": { 00:27:45.052 "name": "Nvme$subsystem", 00:27:45.052 "trtype": "$TEST_TRANSPORT", 00:27:45.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.052 "adrfam": "ipv4", 00:27:45.052 "trsvcid": "$NVMF_PORT", 00:27:45.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.052 "hdgst": ${hdgst:-false}, 00:27:45.052 "ddgst": ${ddgst:-false} 00:27:45.052 }, 00:27:45.052 "method": "bdev_nvme_attach_controller" 00:27:45.052 } 00:27:45.052 EOF 00:27:45.052 )") 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat 
<<-EOF 00:27:45.052 { 00:27:45.052 "params": { 00:27:45.052 "name": "Nvme$subsystem", 00:27:45.052 "trtype": "$TEST_TRANSPORT", 00:27:45.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.052 "adrfam": "ipv4", 00:27:45.052 "trsvcid": "$NVMF_PORT", 00:27:45.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.052 "hdgst": ${hdgst:-false}, 00:27:45.052 "ddgst": ${ddgst:-false} 00:27:45.052 }, 00:27:45.052 "method": "bdev_nvme_attach_controller" 00:27:45.052 } 00:27:45.052 EOF 00:27:45.052 )") 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:45.052 { 00:27:45.052 "params": { 00:27:45.052 "name": "Nvme$subsystem", 00:27:45.052 "trtype": "$TEST_TRANSPORT", 00:27:45.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.052 "adrfam": "ipv4", 00:27:45.052 "trsvcid": "$NVMF_PORT", 00:27:45.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.052 "hdgst": ${hdgst:-false}, 00:27:45.052 "ddgst": ${ddgst:-false} 00:27:45.052 }, 00:27:45.052 "method": "bdev_nvme_attach_controller" 00:27:45.052 } 00:27:45.052 EOF 00:27:45.052 )") 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:45.052 { 00:27:45.052 "params": { 00:27:45.052 "name": "Nvme$subsystem", 00:27:45.052 "trtype": "$TEST_TRANSPORT", 
00:27:45.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.052 "adrfam": "ipv4", 00:27:45.052 "trsvcid": "$NVMF_PORT", 00:27:45.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.052 "hdgst": ${hdgst:-false}, 00:27:45.052 "ddgst": ${ddgst:-false} 00:27:45.052 }, 00:27:45.052 "method": "bdev_nvme_attach_controller" 00:27:45.052 } 00:27:45.052 EOF 00:27:45.052 )") 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:27:45.052 [2024-12-05 12:11:09.949621] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:27:45.052 [2024-12-05 12:11:09.949676] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1429769 ] 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:45.052 { 00:27:45.052 "params": { 00:27:45.052 "name": "Nvme$subsystem", 00:27:45.052 "trtype": "$TEST_TRANSPORT", 00:27:45.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.052 "adrfam": "ipv4", 00:27:45.052 "trsvcid": "$NVMF_PORT", 00:27:45.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.052 "hdgst": ${hdgst:-false}, 00:27:45.052 "ddgst": ${ddgst:-false} 00:27:45.052 }, 00:27:45.052 "method": "bdev_nvme_attach_controller" 00:27:45.052 } 00:27:45.052 EOF 00:27:45.052 )") 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:45.052 { 00:27:45.052 "params": { 00:27:45.052 "name": "Nvme$subsystem", 00:27:45.052 "trtype": "$TEST_TRANSPORT", 00:27:45.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.052 "adrfam": "ipv4", 00:27:45.052 "trsvcid": "$NVMF_PORT", 00:27:45.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.052 "hdgst": ${hdgst:-false}, 00:27:45.052 "ddgst": ${ddgst:-false} 00:27:45.052 }, 00:27:45.052 "method": "bdev_nvme_attach_controller" 00:27:45.052 } 00:27:45.052 EOF 00:27:45.052 )") 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:45.052 { 00:27:45.052 "params": { 00:27:45.052 "name": "Nvme$subsystem", 00:27:45.052 "trtype": "$TEST_TRANSPORT", 00:27:45.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.052 "adrfam": "ipv4", 00:27:45.052 "trsvcid": "$NVMF_PORT", 00:27:45.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.052 "hdgst": ${hdgst:-false}, 00:27:45.052 "ddgst": ${ddgst:-false} 00:27:45.052 }, 00:27:45.052 "method": "bdev_nvme_attach_controller" 00:27:45.052 } 00:27:45.052 EOF 00:27:45.052 )") 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:45.052 12:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:45.052 { 00:27:45.052 "params": { 00:27:45.052 "name": "Nvme$subsystem", 00:27:45.052 "trtype": "$TEST_TRANSPORT", 00:27:45.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:45.052 "adrfam": "ipv4", 00:27:45.052 "trsvcid": "$NVMF_PORT", 00:27:45.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:45.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:45.052 "hdgst": ${hdgst:-false}, 00:27:45.052 "ddgst": ${ddgst:-false} 00:27:45.052 }, 00:27:45.052 "method": "bdev_nvme_attach_controller" 00:27:45.052 } 00:27:45.052 EOF 00:27:45.052 )") 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # jq . 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@397 -- # IFS=, 00:27:45.052 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:27:45.052 "params": { 00:27:45.052 "name": "Nvme1", 00:27:45.052 "trtype": "tcp", 00:27:45.052 "traddr": "10.0.0.2", 00:27:45.052 "adrfam": "ipv4", 00:27:45.052 "trsvcid": "4420", 00:27:45.052 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:45.052 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:45.052 "hdgst": false, 00:27:45.052 "ddgst": false 00:27:45.052 }, 00:27:45.052 "method": "bdev_nvme_attach_controller" 00:27:45.052 },{ 00:27:45.052 "params": { 00:27:45.052 "name": "Nvme2", 00:27:45.052 "trtype": "tcp", 00:27:45.052 "traddr": "10.0.0.2", 00:27:45.052 "adrfam": "ipv4", 00:27:45.052 "trsvcid": "4420", 00:27:45.052 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:45.052 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:45.052 "hdgst": false, 00:27:45.052 "ddgst": false 00:27:45.052 }, 00:27:45.052 "method": "bdev_nvme_attach_controller" 00:27:45.052 },{ 
00:27:45.052 "params": { 00:27:45.052 "name": "Nvme3", 00:27:45.052 "trtype": "tcp", 00:27:45.052 "traddr": "10.0.0.2", 00:27:45.052 "adrfam": "ipv4", 00:27:45.052 "trsvcid": "4420", 00:27:45.052 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:45.052 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:45.052 "hdgst": false, 00:27:45.053 "ddgst": false 00:27:45.053 }, 00:27:45.053 "method": "bdev_nvme_attach_controller" 00:27:45.053 },{ 00:27:45.053 "params": { 00:27:45.053 "name": "Nvme4", 00:27:45.053 "trtype": "tcp", 00:27:45.053 "traddr": "10.0.0.2", 00:27:45.053 "adrfam": "ipv4", 00:27:45.053 "trsvcid": "4420", 00:27:45.053 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:45.053 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:45.053 "hdgst": false, 00:27:45.053 "ddgst": false 00:27:45.053 }, 00:27:45.053 "method": "bdev_nvme_attach_controller" 00:27:45.053 },{ 00:27:45.053 "params": { 00:27:45.053 "name": "Nvme5", 00:27:45.053 "trtype": "tcp", 00:27:45.053 "traddr": "10.0.0.2", 00:27:45.053 "adrfam": "ipv4", 00:27:45.053 "trsvcid": "4420", 00:27:45.053 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:45.053 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:45.053 "hdgst": false, 00:27:45.053 "ddgst": false 00:27:45.053 }, 00:27:45.053 "method": "bdev_nvme_attach_controller" 00:27:45.053 },{ 00:27:45.053 "params": { 00:27:45.053 "name": "Nvme6", 00:27:45.053 "trtype": "tcp", 00:27:45.053 "traddr": "10.0.0.2", 00:27:45.053 "adrfam": "ipv4", 00:27:45.053 "trsvcid": "4420", 00:27:45.053 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:45.053 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:45.053 "hdgst": false, 00:27:45.053 "ddgst": false 00:27:45.053 }, 00:27:45.053 "method": "bdev_nvme_attach_controller" 00:27:45.053 },{ 00:27:45.053 "params": { 00:27:45.053 "name": "Nvme7", 00:27:45.053 "trtype": "tcp", 00:27:45.053 "traddr": "10.0.0.2", 00:27:45.053 "adrfam": "ipv4", 00:27:45.053 "trsvcid": "4420", 00:27:45.053 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:45.053 "hostnqn": 
"nqn.2016-06.io.spdk:host7", 00:27:45.053 "hdgst": false, 00:27:45.053 "ddgst": false 00:27:45.053 }, 00:27:45.053 "method": "bdev_nvme_attach_controller" 00:27:45.053 },{ 00:27:45.053 "params": { 00:27:45.053 "name": "Nvme8", 00:27:45.053 "trtype": "tcp", 00:27:45.053 "traddr": "10.0.0.2", 00:27:45.053 "adrfam": "ipv4", 00:27:45.053 "trsvcid": "4420", 00:27:45.053 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:45.053 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:45.053 "hdgst": false, 00:27:45.053 "ddgst": false 00:27:45.053 }, 00:27:45.053 "method": "bdev_nvme_attach_controller" 00:27:45.053 },{ 00:27:45.053 "params": { 00:27:45.053 "name": "Nvme9", 00:27:45.053 "trtype": "tcp", 00:27:45.053 "traddr": "10.0.0.2", 00:27:45.053 "adrfam": "ipv4", 00:27:45.053 "trsvcid": "4420", 00:27:45.053 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:45.053 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:45.053 "hdgst": false, 00:27:45.053 "ddgst": false 00:27:45.053 }, 00:27:45.053 "method": "bdev_nvme_attach_controller" 00:27:45.053 },{ 00:27:45.053 "params": { 00:27:45.053 "name": "Nvme10", 00:27:45.053 "trtype": "tcp", 00:27:45.053 "traddr": "10.0.0.2", 00:27:45.053 "adrfam": "ipv4", 00:27:45.053 "trsvcid": "4420", 00:27:45.053 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:45.053 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:45.053 "hdgst": false, 00:27:45.053 "ddgst": false 00:27:45.053 }, 00:27:45.053 "method": "bdev_nvme_attach_controller" 00:27:45.053 }' 00:27:45.053 [2024-12-05 12:11:10.039212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:45.053 [2024-12-05 12:11:10.076635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:46.438 Running I/O for 10 seconds... 
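The interleaved heredoc expansions traced above show `nvmf/common.sh` building one `bdev_nvme_attach_controller` entry per subsystem and then comma-joining the fragments (the `IFS=,` / `printf` / `jq .` steps). A minimal standalone sketch of that pattern, simplified to two subsystems — variable names are taken from the trace, but this is an illustration, not SPDK's exact helper:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config assembly visible in the trace above
# (assumption: simplified illustration, not the exact nvmf/common.sh code).
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
  # Each loop iteration appends one JSON fragment; ${hdgst:-false} and
  # ${ddgst:-false} default the digest flags when the caller leaves them unset.
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Comma-join the fragments, as the IFS=, / printf steps in the trace do; the
# real helper then runs the result through `jq .` to validate and pretty-print
# it before handing it to bdevperf.
joined=$(IFS=,; printf '%s' "${config[*]}")
printf '[%s]\n' "$joined"
```

The `"${config[*]}"` expansion under `IFS=,` is what produces the `},{` seams between controller entries seen in the final `printf '%s\n'` output above.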
00:27:46.438 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:46.438 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:27:46.438 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:46.438 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.438 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:27:46.698 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:46.958 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:46.958 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:46.958 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:46.958 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:46.958 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.958 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:46.958 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.958 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:27:46.958 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:27:46.958 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:27:47.219 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:27:47.219 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:47.219 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:47.219 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:47.219 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.219 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1429769 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1429769 
']' 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1429769 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1429769 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1429769' 00:27:47.479 killing process with pid 1429769 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1429769 00:27:47.479 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1429769 00:27:47.479 Received shutdown signal, test time was about 0.982803 seconds 00:27:47.479 00:27:47.479 Latency(us) 00:27:47.479 [2024-12-05T11:11:12.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.479 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.479 Verification LBA range: start 0x0 length 0x400 00:27:47.479 Nvme1n1 : 0.95 208.42 13.03 0.00 0.00 301677.78 2607.79 265639.25 00:27:47.479 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.479 Verification LBA range: start 0x0 length 0x400 00:27:47.479 Nvme2n1 : 0.95 201.97 12.62 0.00 0.00 306311.40 18459.31 234181.97 
00:27:47.479 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.479 Verification LBA range: start 0x0 length 0x400 00:27:47.479 Nvme3n1 : 0.97 263.72 16.48 0.00 0.00 230071.04 17913.17 248162.99 00:27:47.479 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.479 Verification LBA range: start 0x0 length 0x400 00:27:47.479 Nvme4n1 : 0.97 264.71 16.54 0.00 0.00 224339.41 22609.92 232434.35 00:27:47.479 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.479 Verification LBA range: start 0x0 length 0x400 00:27:47.479 Nvme5n1 : 0.98 262.49 16.41 0.00 0.00 221511.68 18896.21 225443.84 00:27:47.480 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.480 Verification LBA range: start 0x0 length 0x400 00:27:47.480 Nvme6n1 : 0.97 263.39 16.46 0.00 0.00 215227.20 13325.65 281367.89 00:27:47.480 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.480 Verification LBA range: start 0x0 length 0x400 00:27:47.480 Nvme7n1 : 0.98 260.71 16.29 0.00 0.00 213486.29 16274.77 267386.88 00:27:47.480 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.480 Verification LBA range: start 0x0 length 0x400 00:27:47.480 Nvme8n1 : 0.98 261.77 16.36 0.00 0.00 207517.23 19005.44 227191.47 00:27:47.480 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.480 Verification LBA range: start 0x0 length 0x400 00:27:47.480 Nvme9n1 : 0.96 200.02 12.50 0.00 0.00 264629.48 13817.17 267386.88 00:27:47.480 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:47.480 Verification LBA range: start 0x0 length 0x400 00:27:47.480 Nvme10n1 : 0.96 199.48 12.47 0.00 0.00 258963.34 22282.24 260396.37 00:27:47.480 [2024-12-05T11:11:12.529Z] =================================================================================================================== 00:27:47.480 
[2024-12-05T11:11:12.529Z] Total : 2386.68 149.17 0.00 0.00 240226.62 2607.79 281367.89 00:27:47.740 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1429524 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@99 -- # sync 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # set +e 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:48.682 rmmod nvme_tcp 00:27:48.682 rmmod nvme_fabrics 00:27:48.682 rmmod nvme_keyring 00:27:48.682 12:11:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # set -e 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # return 0 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # '[' -n 1429524 ']' 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@337 -- # killprocess 1429524 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1429524 ']' 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1429524 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1429524 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1429524' 00:27:48.682 killing process with pid 1429524 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1429524 00:27:48.682 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@978 -- # wait 1429524 00:27:48.943 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:48.943 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # nvmf_fini 00:27:48.943 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@254 -- # local dev 00:27:48.943 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:48.943 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:48.943 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:48.943 12:11:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@121 -- # return 0 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@211 -- # 
local dev=cvl_0_0 in_ns= 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:27:51.491 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # _dev=0 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # dev_map=() 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@274 -- # iptr 00:27:51.491 12:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@548 -- # iptables-save 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@548 -- # iptables-restore 00:27:51.491 00:27:51.491 real 0m8.056s 00:27:51.491 user 0m23.918s 00:27:51.491 sys 0m1.373s 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:51.491 ************************************ 00:27:51.491 END TEST nvmf_shutdown_tc2 00:27:51.491 ************************************ 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:51.491 ************************************ 00:27:51.491 START TEST nvmf_shutdown_tc3 00:27:51.491 ************************************ 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:51.491 12:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # remove_target_ns 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # xtrace_disable 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # pci_devs=() 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:51.491 
12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # net_devs=() 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # e810=() 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # local -ga e810 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # x722=() 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # local -ga x722 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # mlx=() 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # local -ga mlx 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.491 12:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:51.491 12:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:51.491 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:51.491 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:51.492 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:51.492 12:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:51.492 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.492 12:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:51.492 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # is_hw=yes 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@247 -- # 
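The enumeration above (nvmf/common.sh lines 227 and 243) finds the kernel network interfaces backing each PCI device by globbing sysfs and then stripping the directory prefix. A minimal standalone sketch of that lookup, with the sysfs root passed as a parameter so it can be exercised against a mock tree; the function name `pci_net_devs_under` is illustrative, not an SPDK helper:

```shell
# Glob the net/ subdirectory of a PCI device (as nvmf/common.sh does with
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)) and strip everything
# up to the last '/', leaving bare interface names such as cvl_0_0.
pci_net_devs_under() {
    local sysfs_root=$1 pci=$2
    local pci_net_devs=("$sysfs_root/$pci/net/"*)   # one entry per netdev
    pci_net_devs=("${pci_net_devs[@]##*/}")         # basename only, common.sh@243
    printf '%s\n' "${pci_net_devs[@]}"
}
```

In the real script the root is fixed at `/sys/bus/pci/devices`, which is why the log prints `Found net devices under 0000:4b:00.0: cvl_0_0`.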
create_target_ns 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@28 -- # local -g _dev 
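The `set_up lo NVMF_TARGET_NS_CMD` call above shows the pattern the setup.sh helpers use throughout this log: they take an optional variable *name* (`in_ns`), bind it with a bash nameref (`local -n ns=...`), and prepend its contents to the command via `eval`, so the same helper runs either on the host or inside the `nvmf_ns_spdk` namespace. A sketch of that dispatch with a harmless `echo` prefix standing in for `ip netns exec`, so it runs unprivileged; `run_in` and `MOCK_NS_CMD` are illustrative names, not SPDK functions:

```shell
# Command-prefix array, playing the role of
# NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE").
MOCK_NS_CMD=(echo "[netns]")

# Run "$@" either directly (empty first arg) or behind the prefix array
# named by the first arg -- the same shape as set_up/set_ip in setup.sh.
run_in() {
    local in_ns=$1; shift
    local prefix=()
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns          # nameref to the prefix array
        prefix=("${ns[@]}")
    fi
    "${prefix[@]}" "$@"
}
```

With the real prefix array, `run_in NVMF_TARGET_NS_CMD ip link set lo up` would expand to exactly the `ip netns exec nvmf_ns_spdk ip link set lo up` seen in the trace.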
00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # ips=() 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:27:51.492 
12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772161 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # echo 
10.0.0.1 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:51.492 10.0.0.1 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772162 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:51.492 12:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:51.492 10.0.0.2 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:51.492 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:27:51.493 12:11:16 
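The `set_ip` steps above turn the packed pool values 167772161 and 167772162 into `10.0.0.1` and `10.0.0.2` via `val_to_ip` (setup.sh lines 11-13), whose output format `printf '%u.%u.%u.%u\n'` is visible in the trace. One plausible self-contained rendering of that conversion using shell arithmetic; the actual setup.sh body may extract the octets differently:

```shell
# Unpack a 32-bit integer into dotted-quad form, big-endian octet order.
# 0x0a000001 (167772161) is the ip_pool base seen in this log.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}
```

The pool arithmetic in setup.sh (`ips=("$ip" $((++ip)))`) then hands consecutive values to this function, which is why each initiator/target pair gets adjacent addresses.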
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:51.493 12:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:51.493 12:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:51.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:51.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.577 ms 00:27:51.493 00:27:51.493 --- 10.0.0.1 ping statistics --- 00:27:51.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.493 rtt min/avg/max/mdev = 0.577/0.577/0.577/0.000 ms 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=target0 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:51.493 
12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:51.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:51.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:27:51.493 00:27:51.493 --- 10.0.0.2 ping statistics --- 00:27:51.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.493 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # return 0 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:51.493 12:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:51.493 12:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:51.493 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # return 1 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev= 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@160 -- # return 0 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:51.494 12:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=target0 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=target1 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # return 1 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev= 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@160 -- # return 0 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:51.494 ' 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:51.494 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:51.494 12:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # nvmfpid=1431110 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # waitforlisten 1431110 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1431110 ']' 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.756 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:51.756 [2024-12-05 12:11:16.653576] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:27:51.756 [2024-12-05 12:11:16.653637] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.756 [2024-12-05 12:11:16.726071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.756 [2024-12-05 12:11:16.757903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.756 [2024-12-05 12:11:16.757931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:51.756 [2024-12-05 12:11:16.757937] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.756 [2024-12-05 12:11:16.757941] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.756 [2024-12-05 12:11:16.757945] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:51.756 [2024-12-05 12:11:16.759194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.756 [2024-12-05 12:11:16.759348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.756 [2024-12-05 12:11:16.759492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.756 [2024-12-05 12:11:16.759494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.698 [2024-12-05 12:11:17.489097] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.698 12:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.698 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.698 Malloc1 00:27:52.698 [2024-12-05 12:11:17.596213] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.698 Malloc2 00:27:52.698 Malloc3 00:27:52.698 Malloc4 00:27:52.698 Malloc5 00:27:52.958 Malloc6 00:27:52.958 Malloc7 00:27:52.958 Malloc8 00:27:52.958 Malloc9 
00:27:52.958 Malloc10 00:27:52.958 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.958 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:27:52.958 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1431478 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1431478 /var/tmp/bdevperf.sock 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1431478 ']' 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:52.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
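bdevperf is started with `--json /dev/fd/63` because its config is supplied through bash process substitution rather than a temporary file. A hedged sketch of that pattern (the `gen_config` and `consume` helpers here are hypothetical stand-ins, not part of the test scripts):

```shell
# Feed generated JSON to a consumer through process substitution:
# <(...) expands to a readable path such as /dev/fd/63 (bash-specific).
gen_config() {
  printf '{ "note": "example config" }\n'
}
consume() {
  # Stand-in for "bdevperf --json <path>": just read the file back.
  cat "$1"
}
consume <(gen_config)   # prints { "note": "example config" }
```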
00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # config=() 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # local subsystem config 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:52.959 { 00:27:52.959 "params": { 00:27:52.959 "name": "Nvme$subsystem", 00:27:52.959 "trtype": "$TEST_TRANSPORT", 00:27:52.959 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.959 "adrfam": "ipv4", 00:27:52.959 "trsvcid": "$NVMF_PORT", 00:27:52.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.959 "hdgst": ${hdgst:-false}, 00:27:52.959 "ddgst": ${ddgst:-false} 00:27:52.959 }, 00:27:52.959 "method": "bdev_nvme_attach_controller" 00:27:52.959 } 00:27:52.959 EOF 00:27:52.959 )") 00:27:52.959 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:27:52.959 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:52.959 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:52.959 { 00:27:52.959 "params": { 00:27:52.959 "name": "Nvme$subsystem", 00:27:52.959 "trtype": "$TEST_TRANSPORT", 00:27:52.959 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:27:52.959 "adrfam": "ipv4", 00:27:52.959 "trsvcid": "$NVMF_PORT", 00:27:52.959 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:52.959 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:52.959 "hdgst": ${hdgst:-false}, 00:27:52.959 "ddgst": ${ddgst:-false} 00:27:52.959 }, 00:27:52.959 "method": "bdev_nvme_attach_controller" 00:27:52.959 } 00:27:52.959 EOF 00:27:52.959 )") 00:27:52.959 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:27:53.220 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:53.220 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:53.220 { 00:27:53.220 "params": { 00:27:53.220 "name": "Nvme$subsystem", 00:27:53.220 "trtype": "$TEST_TRANSPORT", 00:27:53.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.220 "adrfam": "ipv4", 00:27:53.220 "trsvcid": "$NVMF_PORT", 00:27:53.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.220 "hdgst": ${hdgst:-false}, 00:27:53.220 "ddgst": ${ddgst:-false} 00:27:53.220 }, 00:27:53.220 "method": "bdev_nvme_attach_controller" 00:27:53.220 } 00:27:53.220 EOF 00:27:53.220 )") 00:27:53.220 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:27:53.220 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:53.220 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:53.220 { 00:27:53.220 "params": { 00:27:53.220 "name": "Nvme$subsystem", 00:27:53.220 "trtype": "$TEST_TRANSPORT", 00:27:53.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.220 "adrfam": "ipv4", 00:27:53.220 "trsvcid": "$NVMF_PORT", 00:27:53.220 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.220 "hdgst": ${hdgst:-false}, 00:27:53.220 "ddgst": ${ddgst:-false} 00:27:53.220 }, 00:27:53.220 "method": "bdev_nvme_attach_controller" 00:27:53.220 } 00:27:53.220 EOF 00:27:53.220 )") 00:27:53.220 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:27:53.220 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:53.220 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:53.220 { 00:27:53.220 "params": { 00:27:53.220 "name": "Nvme$subsystem", 00:27:53.220 "trtype": "$TEST_TRANSPORT", 00:27:53.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.220 "adrfam": "ipv4", 00:27:53.220 "trsvcid": "$NVMF_PORT", 00:27:53.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.220 "hdgst": ${hdgst:-false}, 00:27:53.220 "ddgst": ${ddgst:-false} 00:27:53.220 }, 00:27:53.220 "method": "bdev_nvme_attach_controller" 00:27:53.220 } 00:27:53.220 EOF 00:27:53.220 )") 00:27:53.220 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:27:53.220 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:53.220 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:53.220 { 00:27:53.220 "params": { 00:27:53.220 "name": "Nvme$subsystem", 00:27:53.220 "trtype": "$TEST_TRANSPORT", 00:27:53.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.220 "adrfam": "ipv4", 00:27:53.220 "trsvcid": "$NVMF_PORT", 00:27:53.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.220 "hdgst": 
${hdgst:-false}, 00:27:53.220 "ddgst": ${ddgst:-false} 00:27:53.220 }, 00:27:53.220 "method": "bdev_nvme_attach_controller" 00:27:53.220 } 00:27:53.220 EOF 00:27:53.220 )") 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:27:53.221 [2024-12-05 12:11:18.042010] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:27:53.221 [2024-12-05 12:11:18.042063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1431478 ] 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:53.221 { 00:27:53.221 "params": { 00:27:53.221 "name": "Nvme$subsystem", 00:27:53.221 "trtype": "$TEST_TRANSPORT", 00:27:53.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.221 "adrfam": "ipv4", 00:27:53.221 "trsvcid": "$NVMF_PORT", 00:27:53.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.221 "hdgst": ${hdgst:-false}, 00:27:53.221 "ddgst": ${ddgst:-false} 00:27:53.221 }, 00:27:53.221 "method": "bdev_nvme_attach_controller" 00:27:53.221 } 00:27:53.221 EOF 00:27:53.221 )") 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:53.221 { 00:27:53.221 "params": { 00:27:53.221 "name": "Nvme$subsystem", 00:27:53.221 
"trtype": "$TEST_TRANSPORT", 00:27:53.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.221 "adrfam": "ipv4", 00:27:53.221 "trsvcid": "$NVMF_PORT", 00:27:53.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.221 "hdgst": ${hdgst:-false}, 00:27:53.221 "ddgst": ${ddgst:-false} 00:27:53.221 }, 00:27:53.221 "method": "bdev_nvme_attach_controller" 00:27:53.221 } 00:27:53.221 EOF 00:27:53.221 )") 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:53.221 { 00:27:53.221 "params": { 00:27:53.221 "name": "Nvme$subsystem", 00:27:53.221 "trtype": "$TEST_TRANSPORT", 00:27:53.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.221 "adrfam": "ipv4", 00:27:53.221 "trsvcid": "$NVMF_PORT", 00:27:53.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.221 "hdgst": ${hdgst:-false}, 00:27:53.221 "ddgst": ${ddgst:-false} 00:27:53.221 }, 00:27:53.221 "method": "bdev_nvme_attach_controller" 00:27:53.221 } 00:27:53.221 EOF 00:27:53.221 )") 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:27:53.221 { 00:27:53.221 "params": { 00:27:53.221 "name": "Nvme$subsystem", 00:27:53.221 "trtype": "$TEST_TRANSPORT", 00:27:53.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.221 "adrfam": "ipv4", 00:27:53.221 
"trsvcid": "$NVMF_PORT", 00:27:53.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.221 "hdgst": ${hdgst:-false}, 00:27:53.221 "ddgst": ${ddgst:-false} 00:27:53.221 }, 00:27:53.221 "method": "bdev_nvme_attach_controller" 00:27:53.221 } 00:27:53.221 EOF 00:27:53.221 )") 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # jq . 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@397 -- # IFS=, 00:27:53.221 12:11:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:27:53.221 "params": { 00:27:53.221 "name": "Nvme1", 00:27:53.221 "trtype": "tcp", 00:27:53.221 "traddr": "10.0.0.2", 00:27:53.221 "adrfam": "ipv4", 00:27:53.221 "trsvcid": "4420", 00:27:53.221 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.221 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:53.221 "hdgst": false, 00:27:53.221 "ddgst": false 00:27:53.221 }, 00:27:53.221 "method": "bdev_nvme_attach_controller" 00:27:53.221 },{ 00:27:53.221 "params": { 00:27:53.221 "name": "Nvme2", 00:27:53.221 "trtype": "tcp", 00:27:53.221 "traddr": "10.0.0.2", 00:27:53.221 "adrfam": "ipv4", 00:27:53.221 "trsvcid": "4420", 00:27:53.221 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:53.221 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:53.221 "hdgst": false, 00:27:53.221 "ddgst": false 00:27:53.221 }, 00:27:53.221 "method": "bdev_nvme_attach_controller" 00:27:53.221 },{ 00:27:53.221 "params": { 00:27:53.221 "name": "Nvme3", 00:27:53.221 "trtype": "tcp", 00:27:53.221 "traddr": "10.0.0.2", 00:27:53.221 "adrfam": "ipv4", 00:27:53.221 "trsvcid": "4420", 00:27:53.221 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:53.221 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:53.221 "hdgst": false, 
00:27:53.221 "ddgst": false 00:27:53.221 }, 00:27:53.221 "method": "bdev_nvme_attach_controller" 00:27:53.221 },{ 00:27:53.221 "params": { 00:27:53.221 "name": "Nvme4", 00:27:53.221 "trtype": "tcp", 00:27:53.221 "traddr": "10.0.0.2", 00:27:53.221 "adrfam": "ipv4", 00:27:53.221 "trsvcid": "4420", 00:27:53.221 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:53.221 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:53.221 "hdgst": false, 00:27:53.221 "ddgst": false 00:27:53.221 }, 00:27:53.221 "method": "bdev_nvme_attach_controller" 00:27:53.221 },{ 00:27:53.221 "params": { 00:27:53.221 "name": "Nvme5", 00:27:53.221 "trtype": "tcp", 00:27:53.221 "traddr": "10.0.0.2", 00:27:53.221 "adrfam": "ipv4", 00:27:53.221 "trsvcid": "4420", 00:27:53.221 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:53.221 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:53.221 "hdgst": false, 00:27:53.221 "ddgst": false 00:27:53.221 }, 00:27:53.221 "method": "bdev_nvme_attach_controller" 00:27:53.221 },{ 00:27:53.221 "params": { 00:27:53.221 "name": "Nvme6", 00:27:53.221 "trtype": "tcp", 00:27:53.221 "traddr": "10.0.0.2", 00:27:53.221 "adrfam": "ipv4", 00:27:53.221 "trsvcid": "4420", 00:27:53.221 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:53.221 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:53.221 "hdgst": false, 00:27:53.221 "ddgst": false 00:27:53.221 }, 00:27:53.221 "method": "bdev_nvme_attach_controller" 00:27:53.221 },{ 00:27:53.221 "params": { 00:27:53.221 "name": "Nvme7", 00:27:53.221 "trtype": "tcp", 00:27:53.221 "traddr": "10.0.0.2", 00:27:53.221 "adrfam": "ipv4", 00:27:53.221 "trsvcid": "4420", 00:27:53.221 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:53.221 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:53.221 "hdgst": false, 00:27:53.221 "ddgst": false 00:27:53.221 }, 00:27:53.221 "method": "bdev_nvme_attach_controller" 00:27:53.221 },{ 00:27:53.221 "params": { 00:27:53.221 "name": "Nvme8", 00:27:53.221 "trtype": "tcp", 00:27:53.221 "traddr": "10.0.0.2", 00:27:53.221 "adrfam": "ipv4", 
00:27:53.221 "trsvcid": "4420", 00:27:53.221 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:53.221 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:53.221 "hdgst": false, 00:27:53.221 "ddgst": false 00:27:53.221 }, 00:27:53.221 "method": "bdev_nvme_attach_controller" 00:27:53.221 },{ 00:27:53.221 "params": { 00:27:53.221 "name": "Nvme9", 00:27:53.221 "trtype": "tcp", 00:27:53.221 "traddr": "10.0.0.2", 00:27:53.221 "adrfam": "ipv4", 00:27:53.221 "trsvcid": "4420", 00:27:53.221 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:53.221 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:53.221 "hdgst": false, 00:27:53.221 "ddgst": false 00:27:53.221 }, 00:27:53.221 "method": "bdev_nvme_attach_controller" 00:27:53.221 },{ 00:27:53.221 "params": { 00:27:53.221 "name": "Nvme10", 00:27:53.221 "trtype": "tcp", 00:27:53.221 "traddr": "10.0.0.2", 00:27:53.221 "adrfam": "ipv4", 00:27:53.221 "trsvcid": "4420", 00:27:53.221 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:53.221 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:53.221 "hdgst": false, 00:27:53.221 "ddgst": false 00:27:53.221 }, 00:27:53.221 "method": "bdev_nvme_attach_controller" 00:27:53.221 }' 00:27:53.221 [2024-12-05 12:11:18.129342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.221 [2024-12-05 12:11:18.165623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:55.129 Running I/O for 10 seconds... 
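The `config+=("$(cat <<-EOF ...)")` and `IFS=,` traces above show how gen_nvmf_target_json assembles the bdevperf config: one JSON fragment per subsystem collected into a bash array, then joined with commas by `"${config[*]}"`. A simplified sketch of that join (fragment contents abbreviated; the real helper also normalizes the result with `jq .`):

```shell
# Collect one JSON fragment per subsystem via heredoc command substitution.
config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{ "name": "Nvme$subsystem", "method": "bdev_nvme_attach_controller" }
EOF
)")
done
# IFS is the separator used by "${config[*]}"; set it in a subshell so the
# rest of the script keeps normal word splitting.
(IFS=,; printf '%s\n' "${config[*]}")   # three comma-joined objects, one line
```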
00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:27:55.739 12:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:55.739 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1431110 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1431110 ']' 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1431110 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:55.740 12:11:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1431110 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1431110' 00:27:55.740 killing process with pid 1431110 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1431110 00:27:55.740 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1431110 00:27:55.740 [2024-12-05 12:11:20.676875] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d195f0 is same with the state(6) to be set
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d195f0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.677193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d195f0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.677198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d195f0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.677203] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d195f0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.677208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d195f0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.677212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d195f0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.677217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d195f0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.677222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d195f0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678129] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678134] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678139] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678149] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678186] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678195] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678200] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678205] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678220] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678225] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678248] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678253] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678259] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678318] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678323] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678328] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678341] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678365] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.741 [2024-12-05 12:11:20.678374] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.678379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.678383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.678388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.678393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.678397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.678402] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.678407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.678412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.678417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.678423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d479b0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679267] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679273] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679293] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679298] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679312] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679317] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679322] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679327] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679332] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679354] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679359] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679384] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679389] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679396] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679410] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679415] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679449] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679457] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679472] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679482] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679505] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679510] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679515] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679520] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.742 [2024-12-05 12:11:20.679535] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.743 [2024-12-05 12:11:20.679540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.743 [2024-12-05 12:11:20.679544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.743 [2024-12-05 12:11:20.679549] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.743 [2024-12-05 12:11:20.679553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.743 [2024-12-05 12:11:20.679558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.743 [2024-12-05 12:11:20.679563] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.743 [2024-12-05 12:11:20.679567] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set 00:27:55.743 [2024-12-05 12:11:20.679572] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set
00:27:55.743 [2024-12-05 12:11:20.679578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d19ae0 is same with the state(6) to be set
00:27:55.743 [2024-12-05 12:11:20.680918] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a4a0 is same with the state(6) to be set
00:27:55.743 [2024-12-05 12:11:20.680952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a4a0 is same with the state(6) to be set
00:27:55.743 [2024-12-05 12:11:20.681312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.743 [2024-12-05 12:11:20.681346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.743 [2024-12-05 12:11:20.681356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.743 [2024-12-05 12:11:20.681365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.743 [2024-12-05 12:11:20.681363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set
00:27:55.743 [2024-12-05 12:11:20.681374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.743 [2024-12-05 12:11:20.681383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.743 [2024-12-05 12:11:20.681394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.743 [2024-12-05 12:11:20.681402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.743 [2024-12-05 12:11:20.681417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0f960 is same with the state(6) to be set
00:27:55.743 [2024-12-05 12:11:20.681466] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.743 [2024-12-05 12:11:20.681477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.743 [2024-12-05 12:11:20.681488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.743 [2024-12-05 12:11:20.681496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.743 [2024-12-05 12:11:20.681506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.743 [2024-12-05 12:11:20.681514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.743 [2024-12-05 12:11:20.681528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.743 [2024-12-05 12:11:20.681538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.743 [2024-12-05 12:11:20.681546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa7850 is same with the state(6) to be set
00:27:55.743 [2024-12-05 12:11:20.681586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set
00:27:55.744 [2024-12-05 12:11:20.681593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.744 [2024-12-05 12:11:20.681606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.744 [2024-12-05 12:11:20.681616] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:55.744 [2024-12-05 12:11:20.681624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.744 [2024-12-05 12:11:20.681633] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with [2024-12-05 12:11:20.681635]
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsthe state(6) to be set 00:27:55.744 id:0 cdw10:00000000 cdw11:00000000 00:27:55.744 [2024-12-05 12:11:20.681643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.744 [2024-12-05 12:11:20.681649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.744 [2024-12-05 12:11:20.681656] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681662] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with [2024-12-05 12:11:20.681661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cthe state(6) to be set 00:27:55.744 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.744 [2024-12-05 12:11:20.681669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa6750 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681679] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be 
set 00:27:55.744 [2024-12-05 12:11:20.681684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with [2024-12-05 12:11:20.681697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsthe state(6) to be set 00:27:55.744 id:0 cdw10:00000000 cdw11:00000000 00:27:55.744 [2024-12-05 12:11:20.681705] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.744 [2024-12-05 12:11:20.681710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.744 [2024-12-05 12:11:20.681722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:55.744 [2024-12-05 12:11:20.681727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 ns[2024-12-05 12:11:20.681738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with id:0 cdw10:00000000 cdw11:00000000 00:27:55.744 the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681746] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1a820 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.744 [2024-12-05 12:11:20.681756] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.744 [2024-12-05 12:11:20.681764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.744 [2024-12-05 12:11:20.681771] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5fc0 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.681796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.744 [2024-12-05 12:11:20.681805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.744 [2024-12-05 12:11:20.681813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.744 [2024-12-05 12:11:20.681821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.744 [2024-12-05 12:11:20.681829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.744 [2024-12-05 12:11:20.681836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.744 [2024-12-05 12:11:20.681844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.744 [2024-12-05 12:11:20.681852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.744 [2024-12-05 12:11:20.681859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa7cc0 is same with the state(6) to be set 00:27:55.744 [2024-12-05 12:11:20.682417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.744 [2024-12-05 12:11:20.682438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.744 [2024-12-05 12:11:20.682461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.744 [2024-12-05 12:11:20.682470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.744 [2024-12-05 12:11:20.682480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.744 
[2024-12-05 12:11:20.682488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.744 [2024-12-05 12:11:20.682498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.744 [2024-12-05 12:11:20.682506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.744 [2024-12-05 12:11:20.682515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682875] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.745 [2024-12-05 12:11:20.682926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.745 [2024-12-05 12:11:20.682933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.682943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.682951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.682962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.682970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.682979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.682987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.682996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683166] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683260] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.746 [2024-12-05 12:11:20.683344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.746 [2024-12-05 12:11:20.683354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.746 [2024-12-05 12:11:20.683361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.746 [2024-12-05 12:11:20.683371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.746 [2024-12-05 12:11:20.683378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs elided for cid:36-44 (lba:29184-30208, len:128, timestamps 12:11:20.683389-683535) ...]
00:27:55.747 [2024-12-05 12:11:20.683562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:27:55.747 [2024-12-05 12:11:20.684309] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1acf0 is same with the state(6) to be set
[... same tqpair=0x1d1acf0 recv-state message repeated verbatim with successive timestamps through 12:11:20.684627 ...]
00:27:55.748 [2024-12-05 12:11:20.685684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.748 [2024-12-05 12:11:20.685706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical WRITE command / ABORTED - SQ DELETION completion pairs elided for cid:1-51 (lba:24704-31104, len:128, timestamps 12:11:20.685720-686627) ...]
00:27:55.749 [2024-12-05 12:11:20.686963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b1c0 is same with the state(6) to be set
00:27:55.749 [2024-12-05 12:11:20.686986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b1c0 is same with the state(6) to be set
00:27:55.749 [2024-12-05 12:11:20.687407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b6b0 is same with the state(6) to be set
[... same tqpair=0x1d1b6b0 recv-state message repeated verbatim with successive timestamps through 12:11:20.687679 ...]
[2024-12-05 12:11:20.687684] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b6b0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.687688] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b6b0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.687693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b6b0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.687698] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b6b0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.687702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d1b6b0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688152] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688167] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688172] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688183] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688202] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688207] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688211] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688235] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688245] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688250] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688259] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688283] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688287] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.750 [2024-12-05 12:11:20.688292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688302] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688307] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688321] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688335] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688344] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688355] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688359] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688368] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688374] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688416] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688434] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.688442] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d474e0 is same with the state(6) to be set 00:27:55.751 [2024-12-05 12:11:20.699923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.751 [2024-12-05 12:11:20.699958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.751 [2024-12-05 12:11:20.699970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.751 [2024-12-05 12:11:20.699978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.751 [2024-12-05 12:11:20.699988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.751 [2024-12-05 12:11:20.699996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:27:55.751 [2024-12-05 12:11:20.700007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.751 [2024-12-05 12:11:20.700015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.751 [2024-12-05 12:11:20.700034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.751 [2024-12-05 12:11:20.700042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.751 [2024-12-05 12:11:20.700052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.751 [2024-12-05 12:11:20.700060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.751 [2024-12-05 12:11:20.700069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.751 [2024-12-05 12:11:20.700077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.751 [2024-12-05 12:11:20.700088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.751 [2024-12-05 12:11:20.700095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.751 [2024-12-05 12:11:20.700105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.751 [2024-12-05 
12:11:20.700113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.751 [2024-12-05 12:11:20.700122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.751 [2024-12-05 12:11:20.700131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.751 [2024-12-05 12:11:20.700141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.751 [2024-12-05 12:11:20.700148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.751 [2024-12-05 12:11:20.700158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.751 [2024-12-05 12:11:20.700166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.751 [2024-12-05 12:11:20.700787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.751 [2024-12-05 12:11:20.700810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.751 [2024-12-05 12:11:20.700825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.700834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.700844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.700852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.700862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.700870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.700880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.700892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.700902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.700910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.700920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.700928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.700937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.700945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.700955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.700962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.700972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.700980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.700991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.700999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 
[2024-12-05 12:11:20.701152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701247] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.752 [2024-12-05 12:11:20.701369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.752 [2024-12-05 12:11:20.701379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:55.753 [2024-12-05 12:11:20.701447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701549] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 
12:11:20.701851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.753 [2024-12-05 12:11:20.701936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.753 [2024-12-05 12:11:20.701962] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:27:55.754 [2024-12-05 12:11:20.702169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:27:55.754 [2024-12-05 12:11:20.702214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa5fc0 (9): Bad file descriptor 00:27:55.754 [2024-12-05 12:11:20.702266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefe9b0 is same with the state(6) 
to be set 00:27:55.754 [2024-12-05 12:11:20.702376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0f960 (9): Bad file descriptor 00:27:55.754 [2024-12-05 12:11:20.702408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef9550 is same with the state(6) to be set 00:27:55.754 [2024-12-05 12:11:20.702513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa7850 (9): Bad file descriptor 00:27:55.754 [2024-12-05 12:11:20.702541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2f40 is same with the state(6) to be set 00:27:55.754 [2024-12-05 12:11:20.702633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 
12:11:20.702659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bf610 is same with the state(6) to be set 00:27:55.754 [2024-12-05 12:11:20.702719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:55.754 [2024-12-05 12:11:20.702778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.702785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf05570 is same with the state(6) to be set 00:27:55.754 [2024-12-05 12:11:20.702803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa6750 (9): Bad file descriptor 00:27:55.754 [2024-12-05 12:11:20.702818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa7cc0 (9): Bad file descriptor 00:27:55.754 [2024-12-05 12:11:20.705492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.754 [2024-12-05 12:11:20.705515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.705530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.754 [2024-12-05 12:11:20.705540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.705552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.754 [2024-12-05 12:11:20.705565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.705577] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.754 [2024-12-05 12:11:20.705586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.705598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.754 [2024-12-05 12:11:20.705607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.705619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.754 [2024-12-05 12:11:20.705628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.705640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.754 [2024-12-05 12:11:20.705648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.754 [2024-12-05 12:11:20.705658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705891] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.705986] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.705996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.706003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.706013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.706023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.706032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.706040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.706050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.706058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.706068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.706076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.706086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.706093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.706104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.706111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.706120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.706128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.706138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.706145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.706154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.706162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.706172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.706179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 
12:11:20.706190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.706197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.706206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.755 [2024-12-05 12:11:20.706214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.755 [2024-12-05 12:11:20.706223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 
nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:55.756 [2024-12-05 12:11:20.706650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706745] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.756 [2024-12-05 12:11:20.706817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.756 [2024-12-05 12:11:20.706947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:27:55.756 [2024-12-05 12:11:20.706970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:27:55.756 [2024-12-05 12:11:20.706984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bf610 (9): Bad file descriptor 
00:27:55.757 [2024-12-05 12:11:20.708767] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:55.757 [2024-12-05 12:11:20.709155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.757 [2024-12-05 12:11:20.709174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa5fc0 with addr=10.0.0.2, port=4420 00:27:55.757 [2024-12-05 12:11:20.709185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5fc0 is same with the state(6) to be set 00:27:55.757 [2024-12-05 12:11:20.709676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.757 [2024-12-05 12:11:20.709715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa6750 with addr=10.0.0.2, port=4420 00:27:55.757 [2024-12-05 12:11:20.709728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa6750 is same with the state(6) to be set 00:27:55.757 [2024-12-05 12:11:20.709804] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:55.757 [2024-12-05 12:11:20.709852] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:55.757 [2024-12-05 12:11:20.710186] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:55.757 [2024-12-05 12:11:20.710588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:27:55.757 [2024-12-05 12:11:20.710610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefe9b0 (9): Bad file descriptor 00:27:55.757 [2024-12-05 12:11:20.710954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.757 [2024-12-05 12:11:20.710969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bf610 with addr=10.0.0.2, port=4420 00:27:55.757 [2024-12-05 12:11:20.710978] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bf610 is same with the state(6) to be set 00:27:55.757 [2024-12-05 12:11:20.710989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa5fc0 (9): Bad file descriptor 00:27:55.757 [2024-12-05 12:11:20.711001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa6750 (9): Bad file descriptor 00:27:55.757 [2024-12-05 12:11:20.711087] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:55.757 [2024-12-05 12:11:20.711135] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:27:55.757 [2024-12-05 12:11:20.711431] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bf610 (9): Bad file descriptor 00:27:55.757 [2024-12-05 12:11:20.711444] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:27:55.757 [2024-12-05 12:11:20.711452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:27:55.757 [2024-12-05 12:11:20.711470] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:27:55.757 [2024-12-05 12:11:20.711480] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:27:55.757 [2024-12-05 12:11:20.711489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:27:55.757 [2024-12-05 12:11:20.711495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:27:55.757 [2024-12-05 12:11:20.711503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:27:55.757 [2024-12-05 12:11:20.711510] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:27:55.757 [2024-12-05 12:11:20.711900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.757 [2024-12-05 12:11:20.711914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefe9b0 with addr=10.0.0.2, port=4420 00:27:55.757 [2024-12-05 12:11:20.711923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefe9b0 is same with the state(6) to be set 00:27:55.757 [2024-12-05 12:11:20.711936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:27:55.757 [2024-12-05 12:11:20.711944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:27:55.757 [2024-12-05 12:11:20.711952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:27:55.757 [2024-12-05 12:11:20.711959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:27:55.757 [2024-12-05 12:11:20.712005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefe9b0 (9): Bad file descriptor 00:27:55.757 [2024-12-05 12:11:20.712042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:27:55.757 [2024-12-05 12:11:20.712050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:27:55.757 [2024-12-05 12:11:20.712059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 
00:27:55.757 [2024-12-05 12:11:20.712066] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:27:55.757 [2024-12-05 12:11:20.712221] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef9550 (9): Bad file descriptor 00:27:55.757 [2024-12-05 12:11:20.712248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2f40 (9): Bad file descriptor 00:27:55.757 [2024-12-05 12:11:20.712268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf05570 (9): Bad file descriptor 00:27:55.757 [2024-12-05 12:11:20.712370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.757 [2024-12-05 12:11:20.712384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.757 [2024-12-05 12:11:20.712399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.757 [2024-12-05 12:11:20.712409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.757 [2024-12-05 12:11:20.712419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.757 [2024-12-05 12:11:20.712427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.757 [2024-12-05 12:11:20.712438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.757 [2024-12-05 12:11:20.712446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.757 [2024-12-05 12:11:20.712462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.757 [2024-12-05 12:11:20.712471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.757 [2024-12-05 12:11:20.712482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.757 [2024-12-05 12:11:20.712490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.757 [2024-12-05 12:11:20.712501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.757 [2024-12-05 12:11:20.712509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.757 [2024-12-05 12:11:20.712525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.757 [2024-12-05 12:11:20.712534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.757 [2024-12-05 12:11:20.712544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.757 [2024-12-05 12:11:20.712552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.757 [2024-12-05 12:11:20.712562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:55.757 [2024-12-05 12:11:20.712571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.757 [2024-12-05 12:11:20.712581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.757 [2024-12-05 12:11:20.712589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.757 [2024-12-05 12:11:20.712599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.757 [2024-12-05 12:11:20.712607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.757 [2024-12-05 12:11:20.712617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.757 [2024-12-05 12:11:20.712625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712671] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.712980] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.712990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.713000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.713008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.713018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.713026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.713036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.713044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.713054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.713062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.713072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.713079] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.713089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.713097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.713106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.713114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.713124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.713132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.713142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.713150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.713160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.713168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.713178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.758 [2024-12-05 12:11:20.713186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.758 [2024-12-05 12:11:20.713196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.759 [2024-12-05 12:11:20.713203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.759 [2024-12-05 12:11:20.713215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.759 [2024-12-05 12:11:20.713222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.759 [2024-12-05 12:11:20.713232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.759 [2024-12-05 12:11:20.713240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.759 [2024-12-05 12:11:20.713250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.759 [2024-12-05 12:11:20.713258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.759 [2024-12-05 12:11:20.713267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.759 [2024-12-05 12:11:20.713275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.759 [2024-12-05 
12:11:20.713285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.759 [2024-12-05 12:11:20.713293-713555] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:49-63 nsid:1 lba:22656-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repeated identical command/completion pairs condensed)
00:27:55.759 [2024-12-05 12:11:20.713564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcabc10 is same with the state(6) to be set
00:27:55.759 [2024-12-05 12:11:20.714850-716031] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repeated identical command/completion pairs condensed)
00:27:55.761 [2024-12-05 12:11:20.716040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcacd00 is same with the state(6) to be set
00:27:55.761 [2024-12-05 12:11:20.717342-718053] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 cid:0-38 nsid:1 lba:16384-21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repeated identical command/completion pairs condensed) [2024-12-05 12:11:20.718061] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 
12:11:20.718272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.763 [2024-12-05 12:11:20.718522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.763 [2024-12-05 12:11:20.718531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce8330 is same with the state(6) to be set 00:27:55.763 [2024-12-05 12:11:20.719779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:27:55.763 [2024-12-05 12:11:20.719797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:27:55.763 [2024-12-05 12:11:20.719812] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:27:55.763 [2024-12-05 12:11:20.720239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.763 [2024-12-05 12:11:20.720256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa7cc0 with addr=10.0.0.2, port=4420 00:27:55.763 [2024-12-05 12:11:20.720271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa7cc0 is same with the state(6) to be set 
00:27:55.763 [2024-12-05 12:11:20.720714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.763 [2024-12-05 12:11:20.720756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa7850 with addr=10.0.0.2, port=4420 00:27:55.763 [2024-12-05 12:11:20.720769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa7850 is same with the state(6) to be set 00:27:55.763 [2024-12-05 12:11:20.721148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.763 [2024-12-05 12:11:20.721161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0f960 with addr=10.0.0.2, port=4420 00:27:55.764 [2024-12-05 12:11:20.721169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0f960 is same with the state(6) to be set 00:27:55.764 [2024-12-05 12:11:20.722029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:27:55.764 [2024-12-05 12:11:20.722048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:27:55.764 [2024-12-05 12:11:20.722082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa7cc0 (9): Bad file descriptor 00:27:55.764 [2024-12-05 12:11:20.722094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa7850 (9): Bad file descriptor 00:27:55.764 [2024-12-05 12:11:20.722104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0f960 (9): Bad file descriptor 00:27:55.764 [2024-12-05 12:11:20.722159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:27:55.764 [2024-12-05 12:11:20.722324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.764 [2024-12-05 12:11:20.722338] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa6750 with addr=10.0.0.2, port=4420 00:27:55.764 [2024-12-05 12:11:20.722346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa6750 is same with the state(6) to be set 00:27:55.764 [2024-12-05 12:11:20.722655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.764 [2024-12-05 12:11:20.722667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa5fc0 with addr=10.0.0.2, port=4420 00:27:55.764 [2024-12-05 12:11:20.722675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5fc0 is same with the state(6) to be set 00:27:55.764 [2024-12-05 12:11:20.722683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:27:55.764 [2024-12-05 12:11:20.722690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:27:55.764 [2024-12-05 12:11:20.722698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:27:55.764 [2024-12-05 12:11:20.722706] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:27:55.764 [2024-12-05 12:11:20.722715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:27:55.764 [2024-12-05 12:11:20.722721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:27:55.764 [2024-12-05 12:11:20.722728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:27:55.764 [2024-12-05 12:11:20.722735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:27:55.764 [2024-12-05 12:11:20.722742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:27:55.764 [2024-12-05 12:11:20.722748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:27:55.764 [2024-12-05 12:11:20.722761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:27:55.764 [2024-12-05 12:11:20.722768] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:27:55.764 [2024-12-05 12:11:20.722819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:27:55.764 [2024-12-05 12:11:20.723174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.764 [2024-12-05 12:11:20.723187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bf610 with addr=10.0.0.2, port=4420 00:27:55.764 [2024-12-05 12:11:20.723195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bf610 is same with the state(6) to be set 00:27:55.764 [2024-12-05 12:11:20.723204] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa6750 (9): Bad file descriptor 00:27:55.764 [2024-12-05 12:11:20.723213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa5fc0 (9): Bad file descriptor 00:27:55.764 [2024-12-05 12:11:20.723453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:55.764 [2024-12-05 12:11:20.723474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefe9b0 with addr=10.0.0.2, port=4420 00:27:55.764 [2024-12-05 12:11:20.723481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefe9b0 is same with the state(6) to be set 00:27:55.764 
[2024-12-05 12:11:20.723490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bf610 (9): Bad file descriptor 00:27:55.764 [2024-12-05 12:11:20.723498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:27:55.764 [2024-12-05 12:11:20.723504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:27:55.764 [2024-12-05 12:11:20.723512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:27:55.764 [2024-12-05 12:11:20.723518] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:27:55.764 [2024-12-05 12:11:20.723526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:27:55.764 [2024-12-05 12:11:20.723532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:27:55.764 [2024-12-05 12:11:20.723539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:27:55.764 [2024-12-05 12:11:20.723545] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:27:55.764 [2024-12-05 12:11:20.723588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.764 [2024-12-05 12:11:20.723599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.764 [2024-12-05 12:11:20.723613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.764 [2024-12-05 12:11:20.723621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.764 [2024-12-05 12:11:20.723632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.764 [2024-12-05 12:11:20.723639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.764 [2024-12-05 12:11:20.723649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.764 [2024-12-05 12:11:20.723657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.764 [2024-12-05 12:11:20.723670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.764 [2024-12-05 12:11:20.723679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.764 [2024-12-05 12:11:20.723688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.764 [2024-12-05 12:11:20.723696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.764 [2024-12-05 12:11:20.723706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.764 [2024-12-05 12:11:20.723714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.764 [2024-12-05 12:11:20.723724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.764 [2024-12-05 12:11:20.723731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.764 [2024-12-05 12:11:20.723742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.764 [2024-12-05 12:11:20.723749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.764 [2024-12-05 12:11:20.723759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.764 [2024-12-05 12:11:20.723767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.764 [2024-12-05 12:11:20.723778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.764 [2024-12-05 12:11:20.723785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.764 [2024-12-05 12:11:20.723796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.723803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.723813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.723821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.723831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.723839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.723849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.723858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.723869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.723878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.723887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.723897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:55.765 [2024-12-05 12:11:20.723907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.723915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.723926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.723934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.723944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.723951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.723961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.723970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.723980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.723987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.723998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724005] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 
12:11:20.724313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.765 [2024-12-05 12:11:20.724342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.765 [2024-12-05 12:11:20.724351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724414] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 
[2024-12-05 12:11:20.724631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.724756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.724764] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeaa1c0 is same with the state(6) to be set 00:27:55.766 [2024-12-05 12:11:20.726056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.726073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.726087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.726096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.726111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.726121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.726133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 
lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.726143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.726153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.726161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.726171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.726179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.726189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.726197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.726207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.726214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.766 [2024-12-05 12:11:20.726224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.766 [2024-12-05 12:11:20.726232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:55.766 [2024-12-05 12:11:20.726242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726343] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 
12:11:20.726661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.767 [2024-12-05 12:11:20.726732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.767 [2024-12-05 12:11:20.726742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.726750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.726759] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.726767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.726777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.726785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.726795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.726804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.726815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.726822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.726832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.726839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.726850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.726857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.726868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.726875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.726886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.726893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.726904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.726912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.726922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.726930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.726940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.726948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.726958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 
[2024-12-05 12:11:20.726966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.726976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.726983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.726994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.727001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.727012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.727019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.727032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.727039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.727050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.727058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.727069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.727079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.727089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.727097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.727106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.727115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.727125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.727133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.727143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.727151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.727161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.727170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.727180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.727188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.727197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.727206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.727216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.727224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.727234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.727242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:55.768 [2024-12-05 12:11:20.727250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeab3c0 is same with the state(6) to be set 00:27:55.768 [2024-12-05 12:11:20.728544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:55.768 [2024-12-05 12:11:20.728559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:55.768 [2024-12-05 12:11:20.728571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.768 [2024-12-05 12:11:20.728581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.768 [2024-12-05 12:11:20.728592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.768 [2024-12-05 12:11:20.728601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.768 [2024-12-05 12:11:20.728612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.768 [2024-12-05 12:11:20.728619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.728988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.728996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.729006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.729015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.729025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.729033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.729043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.729051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.729060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.729069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.729079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.729087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.729096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.729105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.729114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.729123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.729133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.729141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.729151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.729159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.729169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.769 [2024-12-05 12:11:20.729178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.769 [2024-12-05 12:11:20.729189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05
12:11:20.729291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770
[2024-12-05 12:11:20.729611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:55.770 [2024-12-05 12:11:20.729629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:55.770 [2024-12-05 12:11:20.729637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xead230 is same with the state(6) to be set
00:27:55.770 [2024-12-05 12:11:20.732054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:27:55.770 [2024-12-05 12:11:20.732080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:27:56.031 task offset: 30336 on job bdev=Nvme3n1 fails
00:27:56.031
00:27:56.031 Latency(us)
00:27:56.031 [2024-12-05T11:11:21.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:56.031 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.031 Job: Nvme1n1 ended in about 0.88 seconds with error
00:27:56.031 Verification LBA range: start 0x0 length 0x400
00:27:56.031 Nvme1n1 : 0.88 146.16 9.14 73.08 0.00 288366.93 20862.29 251658.24
00:27:56.031 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.031 Job: Nvme2n1 ended in about 0.88 seconds with error
00:27:56.031 Verification LBA range: start 0x0 length 0x400
00:27:56.031 Nvme2n1 : 0.88 145.75 9.11 72.88 0.00 282876.30 20643.84 246415.36
00:27:56.031 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.031 Job: Nvme3n1 ended in about 0.85 seconds with error
00:27:56.031 Verification LBA range: start 0x0 length 0x400
00:27:56.031 Nvme3n1 : 0.85 226.77 14.17 75.59 0.00 199458.16 2430.29 241172.48
00:27:56.031 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.031 Job: Nvme4n1 ended in about 0.87 seconds with error
00:27:56.031 Verification LBA range: start 0x0 length 0x400
00:27:56.031 Nvme4n1 : 0.87 221.94 13.87 73.98 0.00 199358.93 15619.41 249910.61
00:27:56.031 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.031 Job: Nvme5n1 ended in about 0.89 seconds with error
00:27:56.031 Verification LBA range: start 0x0 length 0x400
00:27:56.031 Nvme5n1 : 0.89 144.32 9.02 72.16 0.00 266878.86 16384.00 249910.61
00:27:56.031 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.031 Job: Nvme6n1 ended in about 0.89 seconds with error
00:27:56.031 Verification LBA range: start 0x0 length 0x400
00:27:56.031 Nvme6n1 : 0.89 149.54 9.35 71.96 0.00 254755.12 20097.71 232434.35
00:27:56.031 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.031 Job: Nvme7n1 ended in about 0.87 seconds with error
00:27:56.031 Verification LBA range: start 0x0 length 0x400
00:27:56.031 Nvme7n1 : 0.87 221.61 13.85 73.87 0.00 185443.89 3768.32 244667.73
00:27:56.031 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.031 Job: Nvme8n1 ended in about 0.89 seconds with error
00:27:56.031 Verification LBA range: start 0x0 length 0x400
00:27:56.031 Nvme8n1 : 0.89 149.15 9.32 66.16 0.00 248786.20 21408.43 258648.75
00:27:56.031 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.031 Job: Nvme9n1 ended in about 0.87 seconds with error
00:27:56.031 Verification LBA range: start 0x0 length 0x400
00:27:56.031 Nvme9n1 : 0.87 147.18 9.20 73.59 0.00 235835.24 5734.40 269134.51
00:27:56.031 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:56.031 Job: Nvme10n1 ended in about 0.88 seconds with error
00:27:56.031 Verification LBA range: start 0x0 length 0x400
00:27:56.031 Nvme10n1 : 0.88 145.34 9.08 72.67 0.00 233089.14 19005.44 246415.36
00:27:56.031 [2024-12-05T11:11:21.080Z] ===================================================================================================================
00:27:56.031 [2024-12-05T11:11:21.080Z] Total : 1697.76 106.11 725.94 0.00 235464.07 2430.29 269134.51
00:27:56.031 [2024-12-05 12:11:20.759560] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:56.031 [2024-12-05 12:11:20.759593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:27:56.031 [2024-12-05 12:11:20.759647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefe9b0 (9): Bad file descriptor
00:27:56.031 [2024-12-05 12:11:20.759660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:27:56.031 [2024-12-05 12:11:20.759668] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:27:56.031 [2024-12-05 12:11:20.759677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:27:56.031 [2024-12-05 12:11:20.759685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:27:56.031 [2024-12-05 12:11:20.760475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.031 [2024-12-05 12:11:20.760497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xed2f40 with addr=10.0.0.2, port=4420
00:27:56.031 [2024-12-05 12:11:20.760507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xed2f40 is same with the state(6) to be set
00:27:56.031 [2024-12-05 12:11:20.760858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.031 [2024-12-05 12:11:20.760870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xef9550 with addr=10.0.0.2, port=4420
00:27:56.031 [2024-12-05 12:11:20.760881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xef9550 is same with the state(6) to be set
00:27:56.031 [2024-12-05 12:11:20.761072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.031 [2024-12-05 12:11:20.761083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf05570 with addr=10.0.0.2, port=4420
00:27:56.031 [2024-12-05 12:11:20.761090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf05570 is same with the state(6) to be set
00:27:56.031 [2024-12-05 12:11:20.761099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:27:56.031 [2024-12-05 12:11:20.761106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:27:56.031 [2024-12-05 12:11:20.761113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:27:56.031 [2024-12-05 12:11:20.761121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:27:56.031 [2024-12-05 12:11:20.761168] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress.
00:27:56.031 [2024-12-05 12:11:20.761182] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress.
00:27:56.031 [2024-12-05 12:11:20.761193] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress.
00:27:56.031 [2024-12-05 12:11:20.762035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:27:56.031 [2024-12-05 12:11:20.762049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:27:56.031 [2024-12-05 12:11:20.762059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:27:56.031 [2024-12-05 12:11:20.762116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xed2f40 (9): Bad file descriptor
00:27:56.031 [2024-12-05 12:11:20.762128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xef9550 (9): Bad file descriptor
00:27:56.031 [2024-12-05 12:11:20.762138] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf05570 (9): Bad file descriptor
00:27:56.031 [2024-12-05 12:11:20.762191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:27:56.031 [2024-12-05 12:11:20.762202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:27:56.031 [2024-12-05 12:11:20.762212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:27:56.031 [2024-12-05 12:11:20.762221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:27:56.032 [2024-12-05 12:11:20.762520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.032 [2024-12-05 12:11:20.762550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf0f960 with addr=10.0.0.2, port=4420
00:27:56.032 [2024-12-05 12:11:20.762558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf0f960 is same with the state(6) to be set
00:27:56.032 [2024-12-05 12:11:20.762917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.032 [2024-12-05 12:11:20.762928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa7850 with addr=10.0.0.2, port=4420
00:27:56.032 [2024-12-05 12:11:20.762936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa7850 is same with the state(6) to be set
00:27:56.032 [2024-12-05 12:11:20.763259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.032 [2024-12-05 12:11:20.763271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa7cc0 with addr=10.0.0.2, port=4420
00:27:56.032 [2024-12-05 12:11:20.763282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa7cc0 is same with the state(6) to be set
00:27:56.032 [2024-12-05 12:11:20.763290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:27:56.032 [2024-12-05 12:11:20.763296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:27:56.032 [2024-12-05 12:11:20.763304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:27:56.032 [2024-12-05 12:11:20.763311] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:27:56.032 [2024-12-05 12:11:20.763319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:27:56.032 [2024-12-05 12:11:20.763325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:27:56.032 [2024-12-05 12:11:20.763332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:27:56.032 [2024-12-05 12:11:20.763339] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:27:56.032 [2024-12-05 12:11:20.763347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:27:56.032 [2024-12-05 12:11:20.763353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:27:56.032 [2024-12-05 12:11:20.763360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:27:56.032 [2024-12-05 12:11:20.763367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:27:56.032 [2024-12-05 12:11:20.763613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.032 [2024-12-05 12:11:20.763626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa5fc0 with addr=10.0.0.2, port=4420
00:27:56.032 [2024-12-05 12:11:20.763634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa5fc0 is same with the state(6) to be set
00:27:56.032 [2024-12-05 12:11:20.763830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.032 [2024-12-05 12:11:20.763841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xaa6750 with addr=10.0.0.2, port=4420
00:27:56.032 [2024-12-05 12:11:20.763849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xaa6750 is same with the state(6) to be set
00:27:56.032 [2024-12-05 12:11:20.764028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.032 [2024-12-05 12:11:20.764039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9bf610 with addr=10.0.0.2, port=4420
00:27:56.032 [2024-12-05 12:11:20.764047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9bf610 is same with the state(6) to be set
00:27:56.032 [2024-12-05 12:11:20.764372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:56.032 [2024-12-05 12:11:20.764383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xefe9b0 with addr=10.0.0.2, port=4420
00:27:56.032 [2024-12-05 12:11:20.764390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xefe9b0 is same with the state(6) to be set
00:27:56.032 [2024-12-05 12:11:20.764400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf0f960 (9): Bad file descriptor
00:27:56.032 [2024-12-05 12:11:20.764410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa7850 (9): Bad file descriptor
00:27:56.032 [2024-12-05 12:11:20.764419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa7cc0 (9): Bad file descriptor
00:27:56.032 [2024-12-05 12:11:20.764449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa5fc0 (9): Bad file descriptor
00:27:56.032 [2024-12-05 12:11:20.764466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaa6750 (9): Bad file descriptor
00:27:56.032 [2024-12-05 12:11:20.764476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9bf610 (9): Bad file descriptor
00:27:56.032 [2024-12-05 12:11:20.764486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefe9b0 (9): Bad file descriptor
00:27:56.032 [2024-12-05 12:11:20.764494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:27:56.032 [2024-12-05 12:11:20.764501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:27:56.032 [2024-12-05 12:11:20.764508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:27:56.032 [2024-12-05 12:11:20.764515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:27:56.032 [2024-12-05 12:11:20.764522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:27:56.032 [2024-12-05 12:11:20.764528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:27:56.032 [2024-12-05 12:11:20.764535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:27:56.032 [2024-12-05 12:11:20.764542] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:27:56.032 [2024-12-05 12:11:20.764549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:27:56.032 [2024-12-05 12:11:20.764555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:27:56.032 [2024-12-05 12:11:20.764562] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:27:56.032 [2024-12-05 12:11:20.764570] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:27:56.032 [2024-12-05 12:11:20.764605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:27:56.032 [2024-12-05 12:11:20.764614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:27:56.032 [2024-12-05 12:11:20.764622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:27:56.032 [2024-12-05 12:11:20.764629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:27:56.032 [2024-12-05 12:11:20.764636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:27:56.032 [2024-12-05 12:11:20.764644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:27:56.032 [2024-12-05 12:11:20.764651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:27:56.032 [2024-12-05 12:11:20.764658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:27:56.032 [2024-12-05 12:11:20.764665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:27:56.032 [2024-12-05 12:11:20.764671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:27:56.032 [2024-12-05 12:11:20.764679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:27:56.032 [2024-12-05 12:11:20.764685] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:27:56.032 [2024-12-05 12:11:20.764693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:27:56.032 [2024-12-05 12:11:20.764699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:27:56.032 [2024-12-05 12:11:20.764709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:27:56.032 [2024-12-05 12:11:20.764717] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:27:56.032 12:11:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1431478
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1431478
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1431478
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # nvmfcleanup
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@99 -- # sync
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # set +e
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # for i in {1..20}
00:27:56.973 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:27:56.973 rmmod nvme_tcp
00:27:56.974 rmmod nvme_fabrics
00:27:56.974 rmmod nvme_keyring
00:27:56.974 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:27:56.974 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # set -e
00:27:56.974 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # return 0
00:27:56.974 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # '[' -n 1431110 ']'
00:27:56.974 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@337 -- # killprocess 1431110
00:27:56.974 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1431110 ']'
00:27:57.235 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1431110
00:27:57.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1431110) - No such process
00:27:57.235 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1431110 is not found'
00:27:57.235 Process with pid 1431110 is not found
00:27:57.235 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:27:57.235 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # nvmf_fini
00:27:57.235 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@254 -- # local dev
00:27:57.235 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@257 -- # remove_target_ns
00:27:57.235 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:27:57.235 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:27:57.235 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@258 -- # delete_main_bridge
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@121 -- # return 0
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns=
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0'
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns=
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1'
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # _dev=0
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # dev_map=()
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@274 -- # iptr
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@548 -- # iptables-save
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@548 -- # iptables-restore
00:27:59.143
00:27:59.143 real 0m8.012s
00:27:59.143 user 0m19.827s
00:27:59.143 sys 0m1.310s
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:27:59.143 ************************************
00:27:59.143 END TEST nvmf_shutdown_tc3
00:27:59.143 ************************************
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]]
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]]
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:59.143 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:27:59.404 ************************************
00:27:59.404 START TEST nvmf_shutdown_tc4
00:27:59.404 ************************************
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@289 -- # '[' -z tcp ']'
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@296 -- # prepare_net_devs
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # local -g is_hw=no
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@260 -- # remove_target_ns
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # [[ phy != virt ]]
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # xtrace_disable
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # pci_devs=()
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # local -a pci_devs
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # pci_net_devs=()
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # local -a pci_net_devs
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # pci_drivers=()
00:27:59.404 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # local -A pci_drivers
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # net_devs=()
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # local -ga net_devs
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # e810=()
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # local -ga e810
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # x722=()
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # local -ga x722
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # mlx=()
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # local -ga mlx
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}")
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}")
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@177 -- # (( 2 == 0 ))
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)'
00:27:59.405 Found 0000:4b:00.0 (0x8086 - 0x159b)
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)'
00:27:59.405 Found 0000:4b:00.1 (0x8086 - 0x159b)
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@208 -- # (( 0 > 0 ))
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@234 -- # [[ up == up ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0'
00:27:59.405 Found net devices under 0000:4b:00.0: cvl_0_0
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@234 -- # [[ up == up ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1'
00:27:59.405 Found net devices under 0000:4b:00.1: cvl_0_1
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@248 -- # (( 2 == 0 ))
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # is_hw=yes
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@264 -- # [[ yes == yes ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # nvmf_tcp_init
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@247 -- # create_target_ns
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up'
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@27 -- # local -gA dev_map
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@28 -- # local -g _dev
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # ips=()
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip)))
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@52 -- # [[ phy == phy ]]
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0
00:27:59.405 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@55 -- # target=cvl_0_1
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@58 -- # [[ phy == veth ]]
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@59 -- # [[ phy == veth ]]
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]]
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns=
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@197 -- # val_to_ip 167772161
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772161
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@197 -- # ip=10.0.0.1
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0'
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias'
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # echo 10.0.0.1
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias
00:27:59.406 10.0.0.1
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@197 -- # val_to_ip 167772162
00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local
val=167772162 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:59.406 10.0.0.2 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:59.406 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:59.668 
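The `val_to_ip` helper traced above (nvmf/setup.sh@11-13) converts the integer pool value (`ip_pool=0x0a000001`) into dotted-quad form before `ip addr add`. A minimal standalone sketch of that conversion — the function name mirrors the script's, the byte-shifting body is a reimplementation, not copied from setup.sh:

```shell
# Convert a 32-bit integer (e.g. 167772161 = 0x0a000001) to dotted-quad form,
# as the val_to_ip helper in nvmf/setup.sh does before assigning addresses.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 -> assigned to the initiator side (cvl_0_0)
val_to_ip 167772162   # 10.0.0.2 -> assigned to the target side (cvl_0_1)
```

Each initiator/target pair consumes two consecutive values from the pool, which is why the loop above advances with `ip_pool += 2` per pair.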
12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:59.668 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:59.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:59.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.667 ms 00:27:59.669 00:27:59.669 --- 10.0.0.1 ping statistics --- 00:27:59.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.669 rtt min/avg/max/mdev = 0.667/0.667/0.667/0.000 ms 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=target0 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:59.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:59.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:27:59.669 00:27:59.669 --- 10.0.0.2 ping statistics --- 00:27:59.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:59.669 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@270 -- # return 0 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:59.669 12:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:59.669 12:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # return 1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev= 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@160 -- # return 0 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:59.669 12:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=target0 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=target1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # return 1 00:27:59.669 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev= 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@160 -- # return 0 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:27:59.670 ' 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:59.670 12:11:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # nvmfpid=1432950 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@329 -- # waitforlisten 1432950 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1432950 ']' 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:59.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:59.670 12:11:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:27:59.930 [2024-12-05 12:11:24.778300] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:27:59.930 [2024-12-05 12:11:24.778367] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:59.930 [2024-12-05 12:11:24.872158] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:59.930 [2024-12-05 12:11:24.906270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:59.930 [2024-12-05 12:11:24.906299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:59.930 [2024-12-05 12:11:24.906305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:59.930 [2024-12-05 12:11:24.906310] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:59.930 [2024-12-05 12:11:24.906317] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
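Before the target application was launched, the trace above walked one full interface-pair setup: create the namespace, move the target-side device into it, assign the pool-derived addresses, bring both links up, open the NVMe/TCP port in iptables, and cross-ping. A dry-run summary of that sequence is sketched below; because the real commands need root and this rig's `cvl_*` NIC devices, the `run()` wrapper only echoes each command (namespace, device names, and addresses are taken from this run):

```shell
# Dry-run sketch of the interface-pair setup traced in nvmf/setup.sh above.
# run() only echoes; swap its body for "$@" to execute for real (needs root).
run() { echo "+ $*"; }

ns=nvmf_ns_spdk
initiator=cvl_0_0
target=cvl_0_1

run ip netns add "$ns"                                      # create_target_ns
run ip netns exec "$ns" ip link set lo up
run ip link set "$target" netns "$ns"                       # add_to_ns
run ip addr add 10.0.0.1/24 dev "$initiator"                # set_ip (initiator)
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target"
run ip link set "$initiator" up                             # set_up
run ip netns exec "$ns" ip link set "$target" up
run iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT
run ip netns exec "$ns" ping -c 1 10.0.0.1                  # target -> initiator
run ping -c 1 10.0.0.2                                      # initiator -> target
```

The namespace split is what lets a single host act as both initiator and target over real TCP, which the test then exercises by running `nvmf_tgt` under `ip netns exec nvmf_ns_spdk`.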
00:27:59.930 [2024-12-05 12:11:24.907896] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:59.930 [2024-12-05 12:11:24.908049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:59.930 [2024-12-05 12:11:24.908201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:59.930 [2024-12-05 12:11:24.908202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:00.869 [2024-12-05 12:11:25.615766] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.869 12:11:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.869 12:11:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:00.869 Malloc1 00:28:00.869 [2024-12-05 12:11:25.729111] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.869 Malloc2 00:28:00.869 Malloc3 00:28:00.869 Malloc4 00:28:00.869 Malloc5 00:28:00.869 Malloc6 00:28:01.130 Malloc7 00:28:01.130 Malloc8 00:28:01.130 Malloc9 
00:28:01.130 Malloc10 00:28:01.130 12:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:01.130 12:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:01.130 12:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:01.130 12:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:01.130 12:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1433332 00:28:01.130 12:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:01.130 12:11:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:01.390 [2024-12-05 12:11:26.207936] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:28:06.673 12:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:06.673 12:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1432950 00:28:06.673 12:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1432950 ']' 00:28:06.673 12:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1432950 00:28:06.673 12:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:28:06.673 12:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:06.673 12:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1432950 00:28:06.673 12:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:06.673 12:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:06.673 12:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1432950' 00:28:06.673 killing process with pid 1432950 00:28:06.673 12:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1432950 00:28:06.673 12:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1432950 00:28:06.673 [2024-12-05 12:11:31.204760] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2376fd0 is same with the state(6) to be set 00:28:06.673 [2024-12-05 
12:11:31.204873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23774a0 is same with the state(6) to be set
[log condensed: the same recv-state error repeats between 12:11:31.204873 and 12:11:31.210172 for tqpair=0x23774a0 (4x), 0x2377840 (9x), 0x23a7b70 (2x), 0x2379430 (7x), 0x2379900 (6x), 0x2379dd0 (4x) and 0x2378f60 (3x)]
00:28:06.673 Write completed with error (sct=0, sc=8) 00:28:06.673 Write completed with error (sct=0, sc=8) 00:28:06.673 Write completed with error (sct=0, sc=8) 00:28:06.673 Write completed with error (sct=0, sc=8) 00:28:06.673 starting
I/O failed: -6
[log condensed: "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" repeat continuously through 12:11:31.210841-12:11:31.217163; concurrent writers also interleaved several tcp.c error lines mid-message. Only the distinct events are kept below, untangled, with repeat counts.]
00:28:06.674 [2024-12-05 12:11:31.210841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.674 [2024-12-05 12:11:31.211699] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23781e0 is same with the state(6) to be set (4x)
00:28:06.674 [2024-12-05 12:11:31.211947] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23786d0 is same with the state(6) to be set (6x)
00:28:06.674 [2024-12-05 12:11:31.211961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.674 [2024-12-05 12:11:31.212382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2377d10 is same with the state(6) to be set (6x)
00:28:06.674 [2024-12-05 12:11:31.212866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.675 [2024-12-05 12:11:31.214285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:06.675 NVMe io qpair process completion error
00:28:06.675 [2024-12-05 12:11:31.215390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.676 [2024-12-05 12:11:31.216207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.676 [2024-12-05 12:11:31.217163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.676 Write completed
with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write 
completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 
Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 [2024-12-05 12:11:31.218814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:06.676 NVMe io qpair process completion error 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.676 starting I/O failed: -6 00:28:06.676 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed 
with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 [2024-12-05 12:11:31.220069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 
00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write 
completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 [2024-12-05 12:11:31.220960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:06.677 starting I/O failed: -6 00:28:06.677 starting I/O failed: -6 00:28:06.677 starting I/O failed: -6 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with 
error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 
starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 [2024-12-05 12:11:31.221947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 
00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.677 Write completed with error (sct=0, sc=8) 00:28:06.677 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, 
sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error 
(sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 [2024-12-05 12:11:31.224805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.678 NVMe io qpair process completion error 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 
00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 [2024-12-05 12:11:31.225802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 
00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 starting I/O failed: -6 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.678 Write completed with error (sct=0, sc=8) 00:28:06.679 starting I/O failed: -6 00:28:06.679 Write completed with error (sct=0, sc=8) 00:28:06.679 starting I/O failed: -6 00:28:06.679 Write completed with error (sct=0, sc=8) 00:28:06.679 Write completed with error (sct=0, sc=8) 00:28:06.679 Write completed with error (sct=0, sc=8) 
00:28:06.679 starting I/O failed: -6
00:28:06.679 Write completed with error (sct=0, sc=8)
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines elided ...]
00:28:06.679 [2024-12-05 12:11:31.226638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines elided ...]
00:28:06.679 [2024-12-05 12:11:31.227592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines elided ...]
00:28:06.680 [2024-12-05 12:11:31.229030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.680 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines elided ...]
00:28:06.680 [2024-12-05 12:11:31.230199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines elided ...]
00:28:06.680 [2024-12-05 12:11:31.231048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines elided ...]
00:28:06.680 [2024-12-05 12:11:31.232207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines elided ...]
00:28:06.681 [2024-12-05 12:11:31.235140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:28:06.681 NVMe io qpair process completion error
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines elided ...]
00:28:06.681 [2024-12-05 12:11:31.236293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines elided ...]
00:28:06.682 [2024-12-05 12:11:31.237098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines elided ...]
00:28:06.682 [2024-12-05 12:11:31.238023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" log lines elided ...]
00:28:06.683 starting I/O failed:
-6 00:28:06.683 [2024-12-05 12:11:31.239648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.683 NVMe io qpair process completion error 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write 
completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 [2024-12-05 12:11:31.240725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error 
(sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 [2024-12-05 12:11:31.241541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on 
qpair id 3 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write 
completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.683 starting I/O failed: -6 00:28:06.683 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 
00:28:06.684 starting I/O failed: -6 00:28:06.684 [2024-12-05 12:11:31.242479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed 
with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write 
completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 [2024-12-05 12:11:31.245077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:06.684 NVMe io qpair process completion error 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write 
completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 [2024-12-05 12:11:31.246386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:28:06.684 starting I/O failed: -6 00:28:06.684 starting I/O failed: -6 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error 
(sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 Write completed with error (sct=0, sc=8) 00:28:06.684 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 [2024-12-05 12:11:31.247373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write 
completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write completed with error (sct=0, sc=8) 
00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 [2024-12-05 12:11:31.248319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: -6 00:28:06.685 Write completed with error (sct=0, sc=8) 00:28:06.685 starting I/O failed: 
00:28:06.685 Write completed with error (sct=0, sc=8)
00:28:06.685 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:28:06.686 [2024-12-05 12:11:31.250482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:28:06.686 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:28:06.686 [2024-12-05 12:11:31.251853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries omitted ...]
00:28:06.686 [2024-12-05 12:11:31.252759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... repeated write-error entries omitted ...]
00:28:06.687 [2024-12-05 12:11:31.253671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error entries omitted ...]
00:28:06.687 [2024-12-05 12:11:31.255094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:28:06.687 NVMe io qpair process completion error
[... repeated write-error entries omitted ...]
00:28:06.688 [2024-12-05 12:11:31.257688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error entries omitted ...]
00:28:06.689 [2024-12-05 12:11:31.262031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:28:06.689 NVMe io qpair process completion error
00:28:06.689 Initializing NVMe Controllers
00:28:06.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:28:06.689 Controller IO queue size 128, less than required.
00:28:06.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:06.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:28:06.689 Controller IO queue size 128, less than required.
00:28:06.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:06.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:28:06.689 Controller IO queue size 128, less than required.
00:28:06.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:06.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:28:06.689 Controller IO queue size 128, less than required.
00:28:06.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:06.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:06.689 Controller IO queue size 128, less than required.
00:28:06.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:06.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:28:06.689 Controller IO queue size 128, less than required.
00:28:06.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:06.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:28:06.689 Controller IO queue size 128, less than required.
00:28:06.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:06.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:28:06.689 Controller IO queue size 128, less than required.
00:28:06.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:06.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:28:06.689 Controller IO queue size 128, less than required.
00:28:06.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:06.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:28:06.689 Controller IO queue size 128, less than required.
00:28:06.689 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:06.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:28:06.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:28:06.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:28:06.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:28:06.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:06.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:28:06.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:28:06.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:28:06.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:28:06.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:28:06.689 Initialization complete. Launching workers.
00:28:06.689 ======================================================== 00:28:06.689 Latency(us) 00:28:06.689 Device Information : IOPS MiB/s Average min max 00:28:06.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1839.41 79.04 69610.19 565.74 127577.48 00:28:06.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1855.59 79.73 69035.23 904.81 134894.89 00:28:06.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1889.89 81.21 67796.91 788.50 119565.33 00:28:06.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1859.00 79.88 68221.70 576.49 119597.74 00:28:06.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1865.18 80.14 68013.65 647.84 130096.98 00:28:06.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1828.12 78.55 69418.39 729.07 128166.65 00:28:06.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1850.27 79.50 68624.04 713.47 128832.10 00:28:06.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1842.39 79.17 68938.97 686.38 121164.33 00:28:06.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1859.85 79.92 68332.60 839.40 128336.57 00:28:06.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1866.88 80.22 68095.35 673.14 128484.68 00:28:06.689 ======================================================== 00:28:06.689 Total : 18556.59 797.35 68604.14 565.74 134894.89 00:28:06.689 00:28:06.689 [2024-12-05 12:11:31.265277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2186ae0 is same with the state(6) to be set 00:28:06.689 [2024-12-05 12:11:31.265321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2185740 is same with the state(6) to be set 00:28:06.689 [2024-12-05 12:11:31.265351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2184ef0 is same with the state(6) to be set 00:28:06.689 [2024-12-05 12:11:31.265381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2186900 is same with the state(6) to be set 00:28:06.689 [2024-12-05 12:11:31.265412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2186720 is same with the state(6) to be set 00:28:06.689 [2024-12-05 12:11:31.265446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2184890 is same with the state(6) to be set 00:28:06.689 [2024-12-05 12:11:31.265482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2184560 is same with the state(6) to be set 00:28:06.689 [2024-12-05 12:11:31.265512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2185a70 is same with the state(6) to be set 00:28:06.689 [2024-12-05 12:11:31.265543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2184bc0 is same with the state(6) to be set 00:28:06.689 [2024-12-05 12:11:31.265579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2185410 is same with the state(6) to be set 00:28:06.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:28:06.689 12:11:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1433332 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1433332 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1433332 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@335 -- # nvmfcleanup 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@99 -- # sync 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@102 -- # set +e 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:07.631 rmmod nvme_tcp 00:28:07.631 rmmod nvme_fabrics 00:28:07.631 rmmod nvme_keyring 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # set -e 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # return 0 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # '[' -n 1432950 ']' 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@337 -- # killprocess 1432950 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1432950 ']' 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1432950 00:28:07.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1432950) - No such process 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1432950 is not found' 00:28:07.631 Process with pid 1432950 is not found 
00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # nvmf_fini 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@254 -- # local dev 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:07.631 12:11:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:09.545 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:09.545 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:09.545 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@121 -- # return 0 00:28:09.545 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:09.545 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:09.545 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:09.545 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:28:09.545 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:28:09.545 12:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:09.545 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:28:09.545 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # _dev=0 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # dev_map=() 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@274 -- # iptr 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@548 -- 
# iptables-save 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@548 -- # iptables-restore 00:28:09.806 00:28:09.806 real 0m10.401s 00:28:09.806 user 0m28.001s 00:28:09.806 sys 0m4.042s 00:28:09.806 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:09.807 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:09.807 ************************************ 00:28:09.807 END TEST nvmf_shutdown_tc4 00:28:09.807 ************************************ 00:28:09.807 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:28:09.807 00:28:09.807 real 0m43.980s 00:28:09.807 user 1m45.537s 00:28:09.807 sys 0m14.124s 00:28:09.807 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:09.807 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:09.807 ************************************ 00:28:09.807 END TEST nvmf_shutdown 00:28:09.807 ************************************ 00:28:09.807 12:11:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:09.807 12:11:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:09.807 12:11:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:09.807 12:11:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:09.807 ************************************ 00:28:09.807 START TEST nvmf_nsid 00:28:09.807 ************************************ 00:28:09.807 12:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:28:09.807 * Looking for test storage... 00:28:09.807 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:09.807 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:09.807 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:28:09.807 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" 
in 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:10.069 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:28:10.069 --rc genhtml_branch_coverage=1 00:28:10.069 --rc genhtml_function_coverage=1 00:28:10.069 --rc genhtml_legend=1 00:28:10.069 --rc geninfo_all_blocks=1 00:28:10.069 --rc geninfo_unexecuted_blocks=1 00:28:10.069 00:28:10.069 ' 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:10.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.069 --rc genhtml_branch_coverage=1 00:28:10.069 --rc genhtml_function_coverage=1 00:28:10.069 --rc genhtml_legend=1 00:28:10.069 --rc geninfo_all_blocks=1 00:28:10.069 --rc geninfo_unexecuted_blocks=1 00:28:10.069 00:28:10.069 ' 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:10.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.069 --rc genhtml_branch_coverage=1 00:28:10.069 --rc genhtml_function_coverage=1 00:28:10.069 --rc genhtml_legend=1 00:28:10.069 --rc geninfo_all_blocks=1 00:28:10.069 --rc geninfo_unexecuted_blocks=1 00:28:10.069 00:28:10.069 ' 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:10.069 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:10.069 --rc genhtml_branch_coverage=1 00:28:10.069 --rc genhtml_function_coverage=1 00:28:10.069 --rc genhtml_legend=1 00:28:10.069 --rc geninfo_all_blocks=1 00:28:10.069 --rc geninfo_unexecuted_blocks=1 00:28:10.069 00:28:10.069 ' 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:10.069 
12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:10.069 12:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:10.069 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@50 -- # : 0 00:28:10.070 12:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:10.070 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:10.070 12:11:34 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@260 -- # remove_target_ns 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # xtrace_disable 00:28:10.070 12:11:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # pci_devs=() 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@135 -- # net_devs=() 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- nvmf/common.sh@135 -- # local -ga net_devs 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # e810=() 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # local -ga e810 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # x722=() 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # local -ga x722 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # mlx=() 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # local -ga mlx 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:18.209 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:18.209 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:18.209 12:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:18.209 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.209 12:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:18.209 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # is_hw=yes 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@247 -- # create_target_ns 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@28 -- # local -g _dev 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:18.209 12:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:18.209 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:28:18.210 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:18.210 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:18.210 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:18.210 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:18.210 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:18.210 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:28:18.210 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:18.210 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:18.210 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:18.210 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:18.210 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:18.210 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:18.210 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:18.210 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:18.210 12:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772161 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:18.210 10.0.0.1 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772162 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:18.210 12:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:18.210 10.0.0.2 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:18.210 12:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:18.210 
12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 
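(Editor's note, not part of the log.) Throughout the trace above, `set_ip` hands an integer from the IP pool (e.g. `167772161`) to a `val_to_ip` helper, which prints it as a dotted quad (`10.0.0.1`) via `printf`. A minimal standalone sketch of that conversion, using bash arithmetic shifts; the function body is a reconstruction, not the script's exact source:

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer IPv4 value to dotted-quad notation,
# mirroring the val_to_ip helper invoked in the trace.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (0x0A000001)
val_to_ip 167772162   # 10.0.0.2
```

Storing the pool as an integer lets the setup code allocate initiator/target pairs by simple arithmetic (`ip_pool += 2` per pair), as seen in the `setup_interfaces` loop.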
00:28:18.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:18.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.651 ms 00:28:18.210 00:28:18.210 --- 10.0.0.1 ping statistics --- 00:28:18.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.210 rtt min/avg/max/mdev = 0.651/0.651/0.651/0.000 ms 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:18.210 12:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:18.210 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:18.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:18.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:28:18.210 00:28:18.210 --- 10.0.0.2 ping statistics --- 00:28:18.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:18.211 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@270 -- # return 0 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # 
get_initiator_ip_address 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:18.211 12:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # return 1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev= 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@160 -- # return 0 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # 
[[ -n cvl_0_1 ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:18.211 12:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # return 1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev= 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@160 -- # return 0 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:28:18.211 ' 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # nvmfpid=1438713 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@329 -- # waitforlisten 1438713 00:28:18.211 
12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1438713 ']' 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:18.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:18.211 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:18.211 [2024-12-05 12:11:42.460821] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:28:18.211 [2024-12-05 12:11:42.460886] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:18.211 [2024-12-05 12:11:42.559491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.211 [2024-12-05 12:11:42.594032] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:18.211 [2024-12-05 12:11:42.594067] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
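(Editor's note, not part of the log.) The `waitforlisten 1438713` call above blocks until the freshly launched `nvmf_tgt` process is up and listening on `/var/tmp/spdk.sock` (note the `rpc_addr` and `max_retries=100` locals in the trace). A hypothetical sketch of that polling pattern; the function name and body here are assumptions, not SPDK's actual implementation:

```shell
#!/usr/bin/env bash
# Poll until a socket (or file) path appears, up to max_retries * 0.1s,
# in the style of the waitforlisten helper seen in the trace.
wait_for_socket() {
  local path=$1 max_retries=${2:-100}
  while (( max_retries-- > 0 )); do
    # Accept either a UNIX socket or a plain file at the path.
    [[ -S "$path" || -e "$path" ]] && return 0
    sleep 0.1
  done
  echo "timed out waiting for $path" >&2
  return 1
}
```

Usage in the trace's terms would be `wait_for_socket /var/tmp/spdk.sock 100`; a real implementation would additionally verify the PID is still alive so a crashed target fails fast instead of timing out.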
00:28:18.211 [2024-12-05 12:11:42.594075] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:18.211 [2024-12-05 12:11:42.594081] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:18.211 [2024-12-05 12:11:42.594087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:18.211 [2024-12-05 12:11:42.594676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1438840 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=1ca0c3dd-b7e1-4e08-b542-cff785965840 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=347ed81f-4e99-49e7-a97f-ac8768801dd9 00:28:18.473 12:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=1afe96d4-1394-4b68-80c5-faf8f7a07555 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:18.473 null0 00:28:18.473 null1 00:28:18.473 [2024-12-05 12:11:43.376966] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:28:18.473 [2024-12-05 12:11:43.377018] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1438840 ] 00:28:18.473 null2 00:28:18.473 [2024-12-05 12:11:43.384357] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.473 [2024-12-05 12:11:43.408576] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:18.473 [2024-12-05 12:11:43.437358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1438840 /var/tmp/tgt2.sock 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1438840 ']' 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:28:18.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:18.473 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:18.473 [2024-12-05 12:11:43.467262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:18.733 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:18.733 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:28:18.733 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:28:18.993 [2024-12-05 12:11:43.922146] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:18.993 [2024-12-05 12:11:43.938251] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:28:18.993 nvme0n1 nvme0n2 00:28:18.993 nvme1n1 00:28:18.993 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:28:18.993 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:28:18.993 12:11:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:20.375 12:11:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:28:20.375 12:11:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:28:20.375 12:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:28:20.375 12:11:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:28:20.375 12:11:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:28:20.375 12:11:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:28:20.375 12:11:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:28:20.375 12:11:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:20.375 12:11:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:20.375 12:11:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:20.375 12:11:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:28:20.375 12:11:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:28:20.375 12:11:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:28:21.758 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:21.758 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:28:21.758 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:21.758 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:28:21.758 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 1ca0c3dd-b7e1-4e08-b542-cff785965840 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr 
-d - 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1ca0c3ddb7e14e08b542cff785965840 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1CA0C3DDB7E14E08B542CFF785965840 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 1CA0C3DDB7E14E08B542CFF785965840 == \1\C\A\0\C\3\D\D\B\7\E\1\4\E\0\8\B\5\4\2\C\F\F\7\8\5\9\6\5\8\4\0 ]] 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 347ed81f-4e99-49e7-a97f-ac8768801dd9 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 
2 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=347ed81f4e9949e7a97fac8768801dd9 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 347ED81F4E9949E7A97FAC8768801DD9 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 347ED81F4E9949E7A97FAC8768801DD9 == \3\4\7\E\D\8\1\F\4\E\9\9\4\9\E\7\A\9\7\F\A\C\8\7\6\8\8\0\1\D\D\9 ]] 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 1afe96d4-1394-4b68-80c5-faf8f7a07555 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 
nsid=3 nguid 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=1afe96d413944b6880c5faf8f7a07555 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 1AFE96D413944B6880C5FAF8F7A07555 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 1AFE96D413944B6880C5FAF8F7A07555 == \1\A\F\E\9\6\D\4\1\3\9\4\4\B\6\8\8\0\C\5\F\A\F\8\F\7\A\0\7\5\5\5 ]] 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1438840 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1438840 ']' 00:28:21.759 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1438840 00:28:22.019 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:22.019 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.019 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1438840 00:28:22.019 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:22.019 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:22.019 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1438840' 00:28:22.019 killing process with pid 1438840 00:28:22.019 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1438840 00:28:22.019 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1438840 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@99 -- # sync 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@102 -- # set +e 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:22.280 rmmod nvme_tcp 00:28:22.280 rmmod nvme_fabrics 00:28:22.280 rmmod nvme_keyring 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # set -e 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # return 0 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # '[' -n 1438713 ']' 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@337 -- # killprocess 1438713 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1438713 ']' 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1438713 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:28:22.280 12:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1438713 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1438713' 00:28:22.280 killing process with pid 1438713 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1438713 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1438713 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@342 -- # nvmf_fini 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@254 -- # local dev 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:22.280 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # return 0 00:28:24.828 12:11:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # _dev=0 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@41 -- # dev_map=() 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@274 -- # iptr 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-save 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-restore 00:28:24.828 00:28:24.828 real 0m14.665s 00:28:24.828 user 0m11.176s 00:28:24.828 sys 0m6.558s 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:28:24.828 ************************************ 00:28:24.828 END TEST nvmf_nsid 00:28:24.828 ************************************ 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:28:24.828 00:28:24.828 real 13m0.547s 00:28:24.828 user 27m5.219s 00:28:24.828 sys 3m52.877s 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.828 12:11:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:24.828 ************************************ 00:28:24.828 END TEST nvmf_target_extra 00:28:24.828 ************************************ 00:28:24.828 12:11:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:24.828 12:11:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:24.828 12:11:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:24.828 12:11:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:24.828 ************************************ 00:28:24.828 START TEST nvmf_host 00:28:24.828 ************************************ 00:28:24.828 12:11:49 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:28:24.828 * Looking for test storage... 00:28:24.828 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:24.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.828 --rc genhtml_branch_coverage=1 00:28:24.828 --rc genhtml_function_coverage=1 00:28:24.828 --rc genhtml_legend=1 00:28:24.828 --rc geninfo_all_blocks=1 00:28:24.828 --rc geninfo_unexecuted_blocks=1 00:28:24.828 00:28:24.828 ' 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:24.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.828 --rc genhtml_branch_coverage=1 00:28:24.828 --rc genhtml_function_coverage=1 00:28:24.828 --rc genhtml_legend=1 00:28:24.828 --rc 
geninfo_all_blocks=1 00:28:24.828 --rc geninfo_unexecuted_blocks=1 00:28:24.828 00:28:24.828 ' 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:24.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.828 --rc genhtml_branch_coverage=1 00:28:24.828 --rc genhtml_function_coverage=1 00:28:24.828 --rc genhtml_legend=1 00:28:24.828 --rc geninfo_all_blocks=1 00:28:24.828 --rc geninfo_unexecuted_blocks=1 00:28:24.828 00:28:24.828 ' 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:24.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:24.828 --rc genhtml_branch_coverage=1 00:28:24.828 --rc genhtml_function_coverage=1 00:28:24.828 --rc genhtml_legend=1 00:28:24.828 --rc geninfo_all_blocks=1 00:28:24.828 --rc geninfo_unexecuted_blocks=1 00:28:24.828 00:28:24.828 ' 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:24.828 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@50 -- # : 0 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:24.829 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.829 ************************************ 00:28:24.829 START TEST nvmf_multicontroller 00:28:24.829 ************************************ 00:28:24.829 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:25.091 * Looking for test storage... 00:28:25.091 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 
-- # case "$op" in 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:28:25.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.091 --rc genhtml_branch_coverage=1 00:28:25.091 --rc genhtml_function_coverage=1 00:28:25.091 --rc genhtml_legend=1 00:28:25.091 --rc geninfo_all_blocks=1 00:28:25.091 --rc geninfo_unexecuted_blocks=1 00:28:25.091 00:28:25.091 ' 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:25.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.091 --rc genhtml_branch_coverage=1 00:28:25.091 --rc genhtml_function_coverage=1 00:28:25.091 --rc genhtml_legend=1 00:28:25.091 --rc geninfo_all_blocks=1 00:28:25.091 --rc geninfo_unexecuted_blocks=1 00:28:25.091 00:28:25.091 ' 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:25.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.091 --rc genhtml_branch_coverage=1 00:28:25.091 --rc genhtml_function_coverage=1 00:28:25.091 --rc genhtml_legend=1 00:28:25.091 --rc geninfo_all_blocks=1 00:28:25.091 --rc geninfo_unexecuted_blocks=1 00:28:25.091 00:28:25.091 ' 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:25.091 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:25.091 --rc genhtml_branch_coverage=1 00:28:25.091 --rc genhtml_function_coverage=1 00:28:25.091 --rc genhtml_legend=1 00:28:25.091 --rc geninfo_all_blocks=1 00:28:25.091 --rc geninfo_unexecuted_blocks=1 00:28:25.091 00:28:25.091 ' 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.091 12:11:49 
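The `cmp_versions 1.15 '<' 2` trace above splits each version on `IFS=.-:` into arrays and compares them component by component (here `1 < 2` on the first component, so the newer lcov branch/function coverage options are selected). A simplified, numeric-only re-sketch of that less-than comparison (hedged: `version_lt` is an illustrative name, not the `scripts/common.sh` implementation):

```shell
#!/usr/bin/env bash
# Compare two version strings numerically, component by component,
# splitting on '.', '-' and ':' as the cmp_versions trace does.
# Returns 0 (true) when $1 is strictly less than $2.
version_lt() {
  local IFS=.-:
  local -a v1=($1) v2=($2)
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions: not strictly less-than
}

version_lt 1.15 2 && r1=yes || r1=no   # 1 < 2 decides on the first component
version_lt 2.1 2  && r2=yes || r2=no   # 2.1 > 2 (second component 1 > 0)
echo "$r1 $r2"
```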
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.091 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:25.092 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.092 12:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.092 12:11:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@50 -- # : 0 
00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:25.092 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:25.092 12:11:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # remove_target_ns 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # xtrace_disable 00:28:25.092 12:11:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # pci_devs=() 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # 
pci_drivers=() 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # net_devs=() 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # e810=() 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # local -ga e810 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # x722=() 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # local -ga x722 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # mlx=() 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # local -ga mlx 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 
00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:33.240 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.240 12:11:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:33.240 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:33.240 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:33.241 12:11:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:33.241 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:33.241 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # is_hw=yes 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # [[ yes == yes ]] 
00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@247 -- # create_target_ns 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:33.241 12:11:57 
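The `set_up` / `set_ip` helpers traced above all use the same pattern: `local -n ns=NVMF_TARGET_NS_CMD` binds a bash nameref to the array holding `ip netns exec nvmf_ns_spdk`, so one helper can run a command either on the host or inside the target namespace. A root-free sketch of that indirection, with `echo` standing in for the real netns prefix (assumption: `run_cmd` is an illustrative name):

```shell
#!/usr/bin/env bash
# Nameref-based dispatch, as in nvmf/setup.sh: when a namespace command
# array is named, prefix the command with it; otherwise run directly.
NVMF_TARGET_NS_CMD=(echo "ip netns exec nvmf_ns_spdk")  # echo stand-in: no root

run_cmd() {
  local in_ns=$1; shift
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns      # nameref: ns aliases the array named by $in_ns
    "${ns[@]}" "$@"         # expands to: ip netns exec nvmf_ns_spdk <cmd>...
  else
    "$@"                    # host namespace: run the command as-is
  fi
}

out=$(run_cmd NVMF_TARGET_NS_CMD ip link set lo up)
echo "$out"
```

The real helpers wrap this in `eval` for xtrace readability, which is why the log shows both the `eval 'ip netns exec nvmf_ns_spdk ...'` line and the expanded command.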
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@28 -- # local -g _dev 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # ips=() 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 
00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772161 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:33.241 10.0.0.1 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local 
dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772162 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:33.241 10.0.0.2 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip 
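The `val_to_ip` trace above turns the integer pool values 167772161 and 167772162 (0x0A000001/2) into `10.0.0.1` and `10.0.0.2` with `printf '%u.%u.%u.%u\n'`. A minimal sketch of the same conversion using bit shifts (hedged: `int_to_ip` is an illustrative name, not the SPDK function, which derives the octets from the same pool arithmetic):

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer to a dotted-quad IPv4 address:
# 167772161 == 0x0A000001 -> 10.0.0.1
int_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) $((  val        & 255 ))
}

ip1=$(int_to_ip 167772161)   # 10.0.0.1
ip2=$(int_to_ip 167772162)   # 10.0.0.2
echo "$ip1 $ip2"
```

Keeping the pool as an integer (`ip_pool=0x0a000001` in the trace) lets the setup script allocate consecutive initiator/target addresses with plain arithmetic (`ips=("$ip" $((++ip)))`).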
link set cvl_0_0 up 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:33.241 12:11:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:33.241 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:33.242 12:11:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:33.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:33.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.711 ms 00:28:33.242 00:28:33.242 --- 10.0.0.1 ping statistics --- 00:28:33.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.242 rtt min/avg/max/mdev = 0.711/0.711/0.711/0.000 ms 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- 
# local dev=target0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:33.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:33.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:28:33.242 00:28:33.242 --- 10.0.0.2 ping statistics --- 00:28:33.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.242 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # return 0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # return 1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev= 
00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@160 -- # return 0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target0 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:33.242 
12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:33.242 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target1 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # return 1 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev= 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@160 -- # return 0 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 
00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:28:33.243 ' 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # nvmfpid=1444031 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # waitforlisten 1444031 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1444031 ']' 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.243 12:11:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.243 [2024-12-05 12:11:57.750117] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:28:33.243 [2024-12-05 12:11:57.750180] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.243 [2024-12-05 12:11:57.848972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:33.243 [2024-12-05 12:11:57.903207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.243 [2024-12-05 12:11:57.903257] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:33.243 [2024-12-05 12:11:57.903266] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.243 [2024-12-05 12:11:57.903273] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.243 [2024-12-05 12:11:57.903279] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:33.243 [2024-12-05 12:11:57.905382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.243 [2024-12-05 12:11:57.905529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.243 [2024-12-05 12:11:57.905556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.816 [2024-12-05 12:11:58.630215] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:28:33.816 Malloc0 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.816 [2024-12-05 12:11:58.700651] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:33.816 
12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.816 [2024-12-05 12:11:58.712556] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.816 Malloc1 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.816 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.817 12:11:58 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1444221 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1444221 /var/tmp/bdevperf.sock 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1444221 ']' 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:33.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:33.817 12:11:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:34.761 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.761 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:28:34.761 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:34.761 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.761 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.023 NVMe0n1 00:28:35.023 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.023 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:35.023 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:35.023 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.023 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.023 12:11:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.023 1 00:28:35.023 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:35.023 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:35.023 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.024 request: 00:28:35.024 { 00:28:35.024 "name": "NVMe0", 00:28:35.024 "trtype": "tcp", 00:28:35.024 "traddr": "10.0.0.2", 00:28:35.024 "adrfam": "ipv4", 00:28:35.024 "trsvcid": "4420", 00:28:35.024 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:28:35.024 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:35.024 "hostaddr": "10.0.0.1", 00:28:35.024 "prchk_reftag": false, 00:28:35.024 "prchk_guard": false, 00:28:35.024 "hdgst": false, 00:28:35.024 "ddgst": false, 00:28:35.024 "allow_unrecognized_csi": false, 00:28:35.024 "method": "bdev_nvme_attach_controller", 00:28:35.024 "req_id": 1 00:28:35.024 } 00:28:35.024 Got JSON-RPC error response 00:28:35.024 response: 00:28:35.024 { 00:28:35.024 "code": -114, 00:28:35.024 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:35.024 } 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.024 request: 00:28:35.024 { 00:28:35.024 "name": "NVMe0", 00:28:35.024 "trtype": "tcp", 00:28:35.024 "traddr": "10.0.0.2", 00:28:35.024 "adrfam": "ipv4", 00:28:35.024 "trsvcid": "4420", 00:28:35.024 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:35.024 "hostaddr": "10.0.0.1", 00:28:35.024 "prchk_reftag": false, 00:28:35.024 "prchk_guard": false, 00:28:35.024 "hdgst": false, 00:28:35.024 "ddgst": false, 00:28:35.024 "allow_unrecognized_csi": false, 00:28:35.024 "method": "bdev_nvme_attach_controller", 00:28:35.024 "req_id": 1 00:28:35.024 } 00:28:35.024 Got JSON-RPC error response 00:28:35.024 response: 00:28:35.024 { 00:28:35.024 "code": -114, 00:28:35.024 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:35.024 } 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.024 request: 00:28:35.024 { 00:28:35.024 "name": "NVMe0", 00:28:35.024 "trtype": "tcp", 00:28:35.024 "traddr": "10.0.0.2", 00:28:35.024 "adrfam": "ipv4", 00:28:35.024 "trsvcid": "4420", 00:28:35.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:35.024 
"hostaddr": "10.0.0.1", 00:28:35.024 "prchk_reftag": false, 00:28:35.024 "prchk_guard": false, 00:28:35.024 "hdgst": false, 00:28:35.024 "ddgst": false, 00:28:35.024 "multipath": "disable", 00:28:35.024 "allow_unrecognized_csi": false, 00:28:35.024 "method": "bdev_nvme_attach_controller", 00:28:35.024 "req_id": 1 00:28:35.024 } 00:28:35.024 Got JSON-RPC error response 00:28:35.024 response: 00:28:35.024 { 00:28:35.024 "code": -114, 00:28:35.024 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:28:35.024 } 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.024 request: 00:28:35.024 { 00:28:35.024 "name": "NVMe0", 00:28:35.024 "trtype": "tcp", 00:28:35.024 "traddr": "10.0.0.2", 00:28:35.024 "adrfam": "ipv4", 00:28:35.024 "trsvcid": "4420", 00:28:35.024 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:35.024 "hostaddr": "10.0.0.1", 00:28:35.024 "prchk_reftag": false, 00:28:35.024 "prchk_guard": false, 00:28:35.024 "hdgst": false, 00:28:35.024 "ddgst": false, 00:28:35.024 "multipath": "failover", 00:28:35.024 "allow_unrecognized_csi": false, 00:28:35.024 "method": "bdev_nvme_attach_controller", 00:28:35.024 "req_id": 1 00:28:35.024 } 00:28:35.024 Got JSON-RPC error response 00:28:35.024 response: 00:28:35.024 { 00:28:35.024 "code": -114, 00:28:35.024 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:35.024 } 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:35.024 
12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:35.024 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.025 12:11:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.025 NVMe0n1 00:28:35.025 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.025 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:35.025 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.025 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.025 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.025 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:35.025 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.025 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.285 00:28:35.285 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.285 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:28:35.285 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:35.285 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.285 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:35.285 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.285 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:35.286 12:12:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:36.669 { 00:28:36.669 "results": [ 00:28:36.669 { 00:28:36.669 "job": "NVMe0n1", 00:28:36.669 "core_mask": "0x1", 00:28:36.669 "workload": "write", 00:28:36.669 "status": "finished", 00:28:36.669 "queue_depth": 128, 00:28:36.669 "io_size": 4096, 00:28:36.669 "runtime": 1.007197, 00:28:36.669 "iops": 28778.878412068345, 00:28:36.669 "mibps": 112.41749379714197, 00:28:36.669 "io_failed": 0, 00:28:36.669 "io_timeout": 0, 00:28:36.669 "avg_latency_us": 4437.821086961521, 00:28:36.669 "min_latency_us": 2143.5733333333333, 00:28:36.669 "max_latency_us": 12288.0 00:28:36.669 } 00:28:36.669 ], 00:28:36.669 "core_count": 1 00:28:36.669 } 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.669 12:12:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1444221 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1444221 ']' 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1444221 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1444221 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1444221' 00:28:36.669 killing process with pid 1444221 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1444221 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1444221 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.669 12:12:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:28:36.669 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:36.669 [2024-12-05 12:11:58.843197] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:28:36.669 [2024-12-05 12:11:58.843273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1444221 ] 00:28:36.669 [2024-12-05 12:11:58.939421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:36.669 [2024-12-05 12:11:58.992787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:36.669 [2024-12-05 12:12:00.280917] bdev.c:4934:bdev_name_add: *ERROR*: Bdev name e74049da-c2dc-41f2-9c68-803a6fcbe7a7 already exists 00:28:36.669 [2024-12-05 12:12:00.280964] bdev.c:8154:bdev_register: *ERROR*: Unable to add uuid:e74049da-c2dc-41f2-9c68-803a6fcbe7a7 alias for bdev NVMe1n1 00:28:36.669 [2024-12-05 12:12:00.280976] bdev_nvme.c:4665:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:36.669 Running I/O for 1 seconds... 00:28:36.669 28764.00 IOPS, 112.36 MiB/s 00:28:36.669 Latency(us) 00:28:36.669 [2024-12-05T11:12:01.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.669 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:36.669 NVMe0n1 : 1.01 28778.88 112.42 0.00 0.00 4437.82 2143.57 12288.00 00:28:36.669 [2024-12-05T11:12:01.718Z] =================================================================================================================== 00:28:36.669 [2024-12-05T11:12:01.718Z] Total : 28778.88 112.42 0.00 0.00 4437.82 2143.57 12288.00 00:28:36.669 Received shutdown signal, test time was about 1.000000 seconds 00:28:36.669 00:28:36.669 Latency(us) 00:28:36.669 [2024-12-05T11:12:01.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.669 [2024-12-05T11:12:01.718Z] =================================================================================================================== 00:28:36.669 [2024-12-05T11:12:01.718Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:28:36.669 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@99 -- # sync 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@102 -- # set +e 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:36.669 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:36.669 rmmod nvme_tcp 00:28:36.669 rmmod nvme_fabrics 00:28:36.931 rmmod nvme_keyring 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@106 -- # set -e 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@107 -- # return 0 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # '[' -n 1444031 ']' 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@337 -- # killprocess 1444031 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1444031 ']' 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1444031 
00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1444031 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1444031' 00:28:36.931 killing process with pid 1444031 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1444031 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1444031 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # nvmf_fini 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@254 -- # local dev 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:36.931 12:12:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@121 -- # return 0 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:28:39.473 12:12:04 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # _dev=0 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # dev_map=() 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@274 -- # iptr 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # iptables-save 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # iptables-restore 00:28:39.473 00:28:39.473 real 0m14.265s 00:28:39.473 user 0m17.552s 00:28:39.473 sys 0m6.626s 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:39.473 ************************************ 00:28:39.473 END TEST nvmf_multicontroller 00:28:39.473 ************************************ 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.473 ************************************ 00:28:39.473 START TEST nvmf_aer 00:28:39.473 ************************************ 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:39.473 * Looking for test storage... 
00:28:39.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:28:39.473 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:39.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.474 --rc genhtml_branch_coverage=1 00:28:39.474 --rc genhtml_function_coverage=1 00:28:39.474 --rc genhtml_legend=1 00:28:39.474 --rc geninfo_all_blocks=1 00:28:39.474 --rc geninfo_unexecuted_blocks=1 00:28:39.474 00:28:39.474 ' 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:39.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.474 --rc 
genhtml_branch_coverage=1 00:28:39.474 --rc genhtml_function_coverage=1 00:28:39.474 --rc genhtml_legend=1 00:28:39.474 --rc geninfo_all_blocks=1 00:28:39.474 --rc geninfo_unexecuted_blocks=1 00:28:39.474 00:28:39.474 ' 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:39.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.474 --rc genhtml_branch_coverage=1 00:28:39.474 --rc genhtml_function_coverage=1 00:28:39.474 --rc genhtml_legend=1 00:28:39.474 --rc geninfo_all_blocks=1 00:28:39.474 --rc geninfo_unexecuted_blocks=1 00:28:39.474 00:28:39.474 ' 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:39.474 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:39.474 --rc genhtml_branch_coverage=1 00:28:39.474 --rc genhtml_function_coverage=1 00:28:39.474 --rc genhtml_legend=1 00:28:39.474 --rc geninfo_all_blocks=1 00:28:39.474 --rc geninfo_unexecuted_blocks=1 00:28:39.474 00:28:39.474 ' 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.474 12:12:04 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@50 -- # : 0 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:39.474 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # remove_target_ns 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # xtrace_disable 00:28:39.474 12:12:04 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # pci_devs=() 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # net_devs=() 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # e810=() 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # local -ga e810 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # x722=() 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # local -ga x722 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # mlx=() 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # local -ga mlx 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:47.768 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:47.768 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 
00:28:47.769 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:47.769 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:47.769 12:12:11 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:47.769 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # is_hw=yes 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@247 -- # create_target_ns 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:47.769 12:12:11 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@28 -- # local -g _dev 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # ips=() 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:47.769 
12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772161 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:47.769 12:12:11 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:47.769 10.0.0.1 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772162 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:47.769 10.0.0.2 
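As the trace above shows, setup.sh's `val_to_ip` turns the integer pool values (167772161, 167772162) into dotted-quad addresses via `printf '%u.%u.%u.%u'`. A minimal stand-alone sketch of that conversion (the function name matches the helper seen in the log; the bit-shift body is an assumption about how the octets are derived):

```shell
# Convert a 32-bit integer to dotted-quad IPv4.
# 167772161 == 0x0A000001 == 10.0.0.1
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```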
00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # 
dev_map["target$id"]=cvl_0_1 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:47.769 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:47.770 12:12:11 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:47.770 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:47.770 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.698 ms 00:28:47.770 00:28:47.770 --- 10.0.0.1 ping statistics --- 00:28:47.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.770 rtt min/avg/max/mdev = 0.698/0.698/0.698/0.000 ms 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:47.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:47.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:28:47.770 00:28:47.770 --- 10.0.0.2 ping statistics --- 00:28:47.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:47.770 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # return 0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 
-- # dev=cvl_0_0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # return 1 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev= 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@160 -- # return 0 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 
00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:47.770 12:12:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target0 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target1 
NVMF_TARGET_NS_CMD 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target1 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # return 1 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev= 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@160 -- # return 0 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:47.770 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:28:47.770 ' 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:47.771 12:12:12 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # nvmfpid=1449634 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # waitforlisten 1449634 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1449634 ']' 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:47.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:47.771 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:47.771 [2024-12-05 12:12:12.136327] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:28:47.771 [2024-12-05 12:12:12.136392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.771 [2024-12-05 12:12:12.234840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:47.771 [2024-12-05 12:12:12.290320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:47.771 [2024-12-05 12:12:12.290372] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:47.771 [2024-12-05 12:12:12.290381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:47.771 [2024-12-05 12:12:12.290388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:47.771 [2024-12-05 12:12:12.290394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:47.771 [2024-12-05 12:12:12.292508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.771 [2024-12-05 12:12:12.292615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:47.771 [2024-12-05 12:12:12.292776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:47.771 [2024-12-05 12:12:12.292777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.033 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:48.033 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:28:48.033 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:48.033 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:48.033 12:12:12 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.033 [2024-12-05 12:12:13.015004] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.033 Malloc0 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.033 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.297 [2024-12-05 12:12:13.088185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:48.297 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.297 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:48.297 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.297 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.297 [ 00:28:48.297 { 00:28:48.297 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:48.297 "subtype": "Discovery", 00:28:48.297 "listen_addresses": 
[], 00:28:48.297 "allow_any_host": true, 00:28:48.297 "hosts": [] 00:28:48.297 }, 00:28:48.298 { 00:28:48.298 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:48.298 "subtype": "NVMe", 00:28:48.298 "listen_addresses": [ 00:28:48.298 { 00:28:48.298 "trtype": "TCP", 00:28:48.298 "adrfam": "IPv4", 00:28:48.298 "traddr": "10.0.0.2", 00:28:48.298 "trsvcid": "4420" 00:28:48.298 } 00:28:48.298 ], 00:28:48.298 "allow_any_host": true, 00:28:48.298 "hosts": [], 00:28:48.298 "serial_number": "SPDK00000000000001", 00:28:48.298 "model_number": "SPDK bdev Controller", 00:28:48.298 "max_namespaces": 2, 00:28:48.298 "min_cntlid": 1, 00:28:48.298 "max_cntlid": 65519, 00:28:48.298 "namespaces": [ 00:28:48.298 { 00:28:48.298 "nsid": 1, 00:28:48.298 "bdev_name": "Malloc0", 00:28:48.298 "name": "Malloc0", 00:28:48.298 "nguid": "8F66B26A89174687B96D53B662EB1615", 00:28:48.298 "uuid": "8f66b26a-8917-4687-b96d-53b662eb1615" 00:28:48.298 } 00:28:48.298 ] 00:28:48.298 } 00:28:48.298 ] 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1449858 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.298 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.559 Malloc1 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.559 12:12:13 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.559 Asynchronous Event Request test 00:28:48.559 Attaching to 10.0.0.2 00:28:48.559 Attached to 10.0.0.2 00:28:48.559 Registering asynchronous event callbacks... 00:28:48.559 Starting namespace attribute notice tests for all controllers... 00:28:48.559 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:48.559 aer_cb - Changed Namespace 00:28:48.559 Cleaning up... 00:28:48.559 [ 00:28:48.559 { 00:28:48.559 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:48.559 "subtype": "Discovery", 00:28:48.559 "listen_addresses": [], 00:28:48.559 "allow_any_host": true, 00:28:48.559 "hosts": [] 00:28:48.559 }, 00:28:48.559 { 00:28:48.559 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:48.559 "subtype": "NVMe", 00:28:48.559 "listen_addresses": [ 00:28:48.559 { 00:28:48.559 "trtype": "TCP", 00:28:48.559 "adrfam": "IPv4", 00:28:48.559 "traddr": "10.0.0.2", 00:28:48.559 "trsvcid": "4420" 00:28:48.559 } 00:28:48.559 ], 00:28:48.559 "allow_any_host": true, 00:28:48.559 "hosts": [], 00:28:48.559 "serial_number": "SPDK00000000000001", 00:28:48.559 "model_number": "SPDK bdev Controller", 00:28:48.559 "max_namespaces": 2, 00:28:48.559 "min_cntlid": 1, 00:28:48.559 "max_cntlid": 65519, 00:28:48.559 "namespaces": [ 00:28:48.559 { 00:28:48.559 "nsid": 1, 00:28:48.559 "bdev_name": "Malloc0", 00:28:48.559 "name": "Malloc0", 00:28:48.559 "nguid": "8F66B26A89174687B96D53B662EB1615", 00:28:48.559 "uuid": "8f66b26a-8917-4687-b96d-53b662eb1615" 00:28:48.559 }, 00:28:48.559 { 00:28:48.559 "nsid": 2, 00:28:48.559 "bdev_name": "Malloc1", 00:28:48.559 "name": "Malloc1", 00:28:48.559 "nguid": "76BC4B044B6C4D8A911CA7D359C9FF9C", 00:28:48.559 "uuid": "76bc4b04-4b6c-4d8a-911c-a7d359c9ff9c" 
00:28:48.559 } 00:28:48.559 ] 00:28:48.559 } 00:28:48.559 ] 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1449858 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@99 -- # sync 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@102 -- # set +e 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:48.559 rmmod nvme_tcp 00:28:48.559 rmmod nvme_fabrics 00:28:48.559 rmmod nvme_keyring 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # set -e 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # return 0 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # '[' -n 1449634 ']' 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@337 -- # killprocess 1449634 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1449634 ']' 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1449634 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:48.559 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1449634 00:28:48.820 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:48.820 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:48.820 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1449634' 00:28:48.821 killing process with pid 1449634 00:28:48.821 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1449634 00:28:48.821 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1449634 00:28:48.821 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # 
'[' '' == iso ']' 00:28:48.821 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # nvmf_fini 00:28:48.821 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@254 -- # local dev 00:28:48.821 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:48.821 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:48.821 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:48.821 12:12:13 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@121 -- # return 0 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # _dev=0 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # dev_map=() 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@274 -- # iptr 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # iptables-save 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # iptables-restore 00:28:51.371 00:28:51.371 real 0m11.729s 00:28:51.371 user 0m8.289s 00:28:51.371 sys 0m6.259s 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:51.371 ************************************ 00:28:51.371 END TEST nvmf_aer 00:28:51.371 ************************************ 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:51.371 12:12:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:51.371 12:12:15 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.371 ************************************ 00:28:51.371 START TEST nvmf_async_init 00:28:51.371 ************************************ 00:28:51.372 12:12:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:51.372 * Looking for test storage... 00:28:51.372 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:28:51.372 
12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:51.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.372 --rc genhtml_branch_coverage=1 00:28:51.372 --rc genhtml_function_coverage=1 00:28:51.372 --rc genhtml_legend=1 00:28:51.372 --rc geninfo_all_blocks=1 00:28:51.372 --rc geninfo_unexecuted_blocks=1 00:28:51.372 00:28:51.372 ' 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:51.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.372 --rc genhtml_branch_coverage=1 00:28:51.372 --rc genhtml_function_coverage=1 00:28:51.372 --rc genhtml_legend=1 00:28:51.372 --rc geninfo_all_blocks=1 00:28:51.372 --rc geninfo_unexecuted_blocks=1 00:28:51.372 00:28:51.372 ' 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:51.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.372 --rc genhtml_branch_coverage=1 00:28:51.372 --rc genhtml_function_coverage=1 00:28:51.372 --rc genhtml_legend=1 00:28:51.372 --rc geninfo_all_blocks=1 00:28:51.372 --rc geninfo_unexecuted_blocks=1 00:28:51.372 00:28:51.372 ' 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:51.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:51.372 --rc genhtml_branch_coverage=1 00:28:51.372 --rc genhtml_function_coverage=1 00:28:51.372 --rc genhtml_legend=1 00:28:51.372 --rc geninfo_all_blocks=1 00:28:51.372 --rc geninfo_unexecuted_blocks=1 00:28:51.372 00:28:51.372 ' 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@50 -- # : 0 00:28:51.372 12:12:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:51.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:51.372 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=d17b22f555db45b2bb4a9c81dc2833c0 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:51.373 
12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # remove_target_ns 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # xtrace_disable 00:28:51.373 12:12:16 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # pci_devs=() 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@135 -- # net_devs=() 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # e810=() 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # local -ga e810 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # x722=() 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # local -ga x722 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # mlx=() 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # local -ga mlx 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:59.532 
12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:59.532 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:59.533 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:59.533 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:59.533 
12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:59.533 Found net devices under 0000:4b:00.0: cvl_0_0 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:59.533 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # is_hw=yes 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@247 -- # create_target_ns 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:59.533 12:12:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@28 -- # local -g _dev 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:59.533 
12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # ips=() 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 
167772161 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772161 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:59.533 10.0.0.1 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:59.533 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772162 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_1' 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:59.534 10.0.0.2 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:59.534 12:12:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # 
local dev=initiator0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:59.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:59.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.600 ms 00:28:59.534 00:28:59.534 --- 10.0.0.1 ping statistics --- 00:28:59.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.534 rtt min/avg/max/mdev = 0.600/0.600/0.600/0.000 ms 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:59.534 12:12:23 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:59.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:59.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:28:59.534 00:28:59.534 --- 10.0.0.2 ping statistics --- 00:28:59.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:59.534 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # return 0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:59.534 
12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:59.534 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator1 
00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # return 1 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev= 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@160 -- # return 0 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target0 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_1 
00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target1 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # return 1 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 
-- # dev= 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@160 -- # return 0 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:28:59.535 ' 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # nvmfpid=1454204 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # waitforlisten 1454204 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@835 -- # '[' -z 1454204 ']' 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:59.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:59.535 12:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.535 [2024-12-05 12:12:23.931988] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:28:59.535 [2024-12-05 12:12:23.932052] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:59.535 [2024-12-05 12:12:24.030687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:59.535 [2024-12-05 12:12:24.082212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:59.535 [2024-12-05 12:12:24.082260] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:59.535 [2024-12-05 12:12:24.082269] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:59.535 [2024-12-05 12:12:24.082277] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:59.535 [2024-12-05 12:12:24.082289] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:59.535 [2024-12-05 12:12:24.083038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.795 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:59.795 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:28:59.795 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:59.795 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:59.795 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.795 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.795 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:59.795 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.795 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.795 [2024-12-05 12:12:24.798337] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:59.795 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.796 null0 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g d17b22f555db45b2bb4a9c81dc2833c0 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.796 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.056 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.056 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:00.056 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.056 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.056 [2024-12-05 12:12:24.858740] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.056 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.056 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:00.056 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.056 12:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.056 nvme0n1 00:29:00.056 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.056 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:00.056 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.056 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.330 [ 00:29:00.330 { 00:29:00.330 "name": "nvme0n1", 00:29:00.330 "aliases": [ 00:29:00.330 "d17b22f5-55db-45b2-bb4a-9c81dc2833c0" 00:29:00.330 ], 00:29:00.330 "product_name": "NVMe disk", 00:29:00.330 "block_size": 512, 00:29:00.330 "num_blocks": 2097152, 00:29:00.330 "uuid": "d17b22f5-55db-45b2-bb4a-9c81dc2833c0", 00:29:00.330 "numa_id": 0, 00:29:00.330 "assigned_rate_limits": { 00:29:00.330 "rw_ios_per_sec": 0, 00:29:00.330 "rw_mbytes_per_sec": 0, 00:29:00.330 "r_mbytes_per_sec": 0, 00:29:00.330 "w_mbytes_per_sec": 0 00:29:00.330 }, 00:29:00.330 "claimed": false, 00:29:00.330 "zoned": false, 00:29:00.330 "supported_io_types": { 00:29:00.330 "read": true, 00:29:00.330 "write": true, 00:29:00.330 "unmap": false, 00:29:00.330 "flush": true, 00:29:00.330 "reset": true, 00:29:00.330 "nvme_admin": true, 00:29:00.330 "nvme_io": true, 00:29:00.330 "nvme_io_md": false, 00:29:00.330 "write_zeroes": true, 00:29:00.330 "zcopy": false, 00:29:00.330 "get_zone_info": false, 00:29:00.330 "zone_management": false, 00:29:00.330 "zone_append": false, 00:29:00.330 "compare": true, 00:29:00.330 "compare_and_write": true, 00:29:00.330 "abort": true, 00:29:00.330 "seek_hole": false, 00:29:00.330 "seek_data": false, 00:29:00.330 "copy": true, 00:29:00.330 
"nvme_iov_md": false 00:29:00.330 }, 00:29:00.330 "memory_domains": [ 00:29:00.330 { 00:29:00.330 "dma_device_id": "system", 00:29:00.330 "dma_device_type": 1 00:29:00.330 } 00:29:00.330 ], 00:29:00.330 "driver_specific": { 00:29:00.330 "nvme": [ 00:29:00.330 { 00:29:00.330 "trid": { 00:29:00.330 "trtype": "TCP", 00:29:00.330 "adrfam": "IPv4", 00:29:00.330 "traddr": "10.0.0.2", 00:29:00.330 "trsvcid": "4420", 00:29:00.330 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:00.330 }, 00:29:00.330 "ctrlr_data": { 00:29:00.330 "cntlid": 1, 00:29:00.330 "vendor_id": "0x8086", 00:29:00.330 "model_number": "SPDK bdev Controller", 00:29:00.330 "serial_number": "00000000000000000000", 00:29:00.330 "firmware_revision": "25.01", 00:29:00.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.330 "oacs": { 00:29:00.330 "security": 0, 00:29:00.330 "format": 0, 00:29:00.330 "firmware": 0, 00:29:00.330 "ns_manage": 0 00:29:00.330 }, 00:29:00.330 "multi_ctrlr": true, 00:29:00.330 "ana_reporting": false 00:29:00.330 }, 00:29:00.330 "vs": { 00:29:00.330 "nvme_version": "1.3" 00:29:00.330 }, 00:29:00.330 "ns_data": { 00:29:00.330 "id": 1, 00:29:00.330 "can_share": true 00:29:00.330 } 00:29:00.330 } 00:29:00.330 ], 00:29:00.330 "mp_policy": "active_passive" 00:29:00.330 } 00:29:00.330 } 00:29:00.330 ] 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.330 [2024-12-05 12:12:25.135223] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:00.330 [2024-12-05 12:12:25.135307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x126ef50 (9): Bad file descriptor 00:29:00.330 [2024-12-05 12:12:25.267563] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.330 [ 00:29:00.330 { 00:29:00.330 "name": "nvme0n1", 00:29:00.330 "aliases": [ 00:29:00.330 "d17b22f5-55db-45b2-bb4a-9c81dc2833c0" 00:29:00.330 ], 00:29:00.330 "product_name": "NVMe disk", 00:29:00.330 "block_size": 512, 00:29:00.330 "num_blocks": 2097152, 00:29:00.330 "uuid": "d17b22f5-55db-45b2-bb4a-9c81dc2833c0", 00:29:00.330 "numa_id": 0, 00:29:00.330 "assigned_rate_limits": { 00:29:00.330 "rw_ios_per_sec": 0, 00:29:00.330 "rw_mbytes_per_sec": 0, 00:29:00.330 "r_mbytes_per_sec": 0, 00:29:00.330 "w_mbytes_per_sec": 0 00:29:00.330 }, 00:29:00.330 "claimed": false, 00:29:00.330 "zoned": false, 00:29:00.330 "supported_io_types": { 00:29:00.330 "read": true, 00:29:00.330 "write": true, 00:29:00.330 "unmap": false, 00:29:00.330 "flush": true, 00:29:00.330 "reset": true, 00:29:00.330 "nvme_admin": true, 00:29:00.330 "nvme_io": true, 00:29:00.330 "nvme_io_md": false, 00:29:00.330 "write_zeroes": true, 00:29:00.330 "zcopy": false, 00:29:00.330 "get_zone_info": false, 00:29:00.330 "zone_management": false, 00:29:00.330 "zone_append": false, 00:29:00.330 "compare": true, 00:29:00.330 "compare_and_write": true, 00:29:00.330 "abort": true, 00:29:00.330 "seek_hole": false, 00:29:00.330 "seek_data": false, 00:29:00.330 "copy": true, 00:29:00.330 "nvme_iov_md": false 00:29:00.330 }, 00:29:00.330 "memory_domains": [ 
00:29:00.330 { 00:29:00.330 "dma_device_id": "system", 00:29:00.330 "dma_device_type": 1 00:29:00.330 } 00:29:00.330 ], 00:29:00.330 "driver_specific": { 00:29:00.330 "nvme": [ 00:29:00.330 { 00:29:00.330 "trid": { 00:29:00.330 "trtype": "TCP", 00:29:00.330 "adrfam": "IPv4", 00:29:00.330 "traddr": "10.0.0.2", 00:29:00.330 "trsvcid": "4420", 00:29:00.330 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:00.330 }, 00:29:00.330 "ctrlr_data": { 00:29:00.330 "cntlid": 2, 00:29:00.330 "vendor_id": "0x8086", 00:29:00.330 "model_number": "SPDK bdev Controller", 00:29:00.330 "serial_number": "00000000000000000000", 00:29:00.330 "firmware_revision": "25.01", 00:29:00.330 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.330 "oacs": { 00:29:00.330 "security": 0, 00:29:00.330 "format": 0, 00:29:00.330 "firmware": 0, 00:29:00.330 "ns_manage": 0 00:29:00.330 }, 00:29:00.330 "multi_ctrlr": true, 00:29:00.330 "ana_reporting": false 00:29:00.330 }, 00:29:00.330 "vs": { 00:29:00.330 "nvme_version": "1.3" 00:29:00.330 }, 00:29:00.330 "ns_data": { 00:29:00.330 "id": 1, 00:29:00.330 "can_share": true 00:29:00.330 } 00:29:00.330 } 00:29:00.330 ], 00:29:00.330 "mp_policy": "active_passive" 00:29:00.330 } 00:29:00.330 } 00:29:00.330 ] 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.fmNow5Yg9b 
00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.fmNow5Yg9b 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.fmNow5Yg9b 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.330 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.330 [2024-12-05 12:12:25.355939] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:00.330 [2024-12-05 12:12:25.356109] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:00.331 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:00.331 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:00.331 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.331 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.603 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.603 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:00.603 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.603 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.603 [2024-12-05 12:12:25.380017] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:00.603 nvme0n1 00:29:00.603 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.603 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:00.603 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.603 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.603 [ 00:29:00.603 { 00:29:00.603 "name": "nvme0n1", 00:29:00.603 "aliases": [ 00:29:00.603 "d17b22f5-55db-45b2-bb4a-9c81dc2833c0" 00:29:00.603 ], 00:29:00.603 "product_name": "NVMe disk", 00:29:00.603 "block_size": 512, 00:29:00.603 "num_blocks": 2097152, 00:29:00.603 "uuid": "d17b22f5-55db-45b2-bb4a-9c81dc2833c0", 00:29:00.603 "numa_id": 0, 00:29:00.603 "assigned_rate_limits": { 00:29:00.603 "rw_ios_per_sec": 0, 00:29:00.604 
"rw_mbytes_per_sec": 0, 00:29:00.604 "r_mbytes_per_sec": 0, 00:29:00.604 "w_mbytes_per_sec": 0 00:29:00.604 }, 00:29:00.604 "claimed": false, 00:29:00.604 "zoned": false, 00:29:00.604 "supported_io_types": { 00:29:00.604 "read": true, 00:29:00.604 "write": true, 00:29:00.604 "unmap": false, 00:29:00.604 "flush": true, 00:29:00.604 "reset": true, 00:29:00.604 "nvme_admin": true, 00:29:00.604 "nvme_io": true, 00:29:00.604 "nvme_io_md": false, 00:29:00.604 "write_zeroes": true, 00:29:00.604 "zcopy": false, 00:29:00.604 "get_zone_info": false, 00:29:00.604 "zone_management": false, 00:29:00.604 "zone_append": false, 00:29:00.604 "compare": true, 00:29:00.604 "compare_and_write": true, 00:29:00.604 "abort": true, 00:29:00.604 "seek_hole": false, 00:29:00.604 "seek_data": false, 00:29:00.604 "copy": true, 00:29:00.604 "nvme_iov_md": false 00:29:00.604 }, 00:29:00.604 "memory_domains": [ 00:29:00.604 { 00:29:00.604 "dma_device_id": "system", 00:29:00.604 "dma_device_type": 1 00:29:00.604 } 00:29:00.604 ], 00:29:00.604 "driver_specific": { 00:29:00.604 "nvme": [ 00:29:00.604 { 00:29:00.604 "trid": { 00:29:00.604 "trtype": "TCP", 00:29:00.604 "adrfam": "IPv4", 00:29:00.604 "traddr": "10.0.0.2", 00:29:00.604 "trsvcid": "4421", 00:29:00.604 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:00.604 }, 00:29:00.604 "ctrlr_data": { 00:29:00.604 "cntlid": 3, 00:29:00.604 "vendor_id": "0x8086", 00:29:00.604 "model_number": "SPDK bdev Controller", 00:29:00.604 "serial_number": "00000000000000000000", 00:29:00.604 "firmware_revision": "25.01", 00:29:00.604 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:00.604 "oacs": { 00:29:00.604 "security": 0, 00:29:00.604 "format": 0, 00:29:00.604 "firmware": 0, 00:29:00.604 "ns_manage": 0 00:29:00.604 }, 00:29:00.604 "multi_ctrlr": true, 00:29:00.604 "ana_reporting": false 00:29:00.604 }, 00:29:00.604 "vs": { 00:29:00.604 "nvme_version": "1.3" 00:29:00.604 }, 00:29:00.604 "ns_data": { 00:29:00.604 "id": 1, 00:29:00.604 "can_share": true 00:29:00.604 } 
00:29:00.604 } 00:29:00.604 ], 00:29:00.604 "mp_policy": "active_passive" 00:29:00.604 } 00:29:00.604 } 00:29:00.604 ] 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.fmNow5Yg9b 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@99 -- # sync 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # set +e 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:00.604 rmmod nvme_tcp 00:29:00.604 rmmod nvme_fabrics 00:29:00.604 rmmod nvme_keyring 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # set -e 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # return 0 00:29:00.604 12:12:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # '[' -n 1454204 ']' 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@337 -- # killprocess 1454204 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1454204 ']' 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1454204 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1454204 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1454204' 00:29:00.604 killing process with pid 1454204 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1454204 00:29:00.604 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1454204 00:29:00.864 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:00.864 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # nvmf_fini 00:29:00.864 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@254 -- # local dev 00:29:00.864 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:00.864 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:00.864 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # 
eval '_remove_target_ns 15> /dev/null' 00:29:00.864 12:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@121 -- # return 0 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:03.412 12:12:27 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # _dev=0 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # dev_map=() 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@274 -- # iptr 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # iptables-save 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # iptables-restore 00:29:03.412 00:29:03.412 real 0m11.919s 00:29:03.412 user 0m4.378s 00:29:03.412 sys 0m6.112s 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:03.412 ************************************ 00:29:03.412 END TEST nvmf_async_init 00:29:03.412 ************************************ 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.412 ************************************ 00:29:03.412 START TEST dma 00:29:03.412 ************************************ 00:29:03.412 12:12:27 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:03.412 * Looking for test storage... 00:29:03.412 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:03.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.412 --rc genhtml_branch_coverage=1 00:29:03.412 --rc genhtml_function_coverage=1 00:29:03.412 --rc genhtml_legend=1 00:29:03.412 --rc geninfo_all_blocks=1 00:29:03.412 --rc geninfo_unexecuted_blocks=1 00:29:03.412 00:29:03.412 ' 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:03.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.412 --rc genhtml_branch_coverage=1 00:29:03.412 --rc genhtml_function_coverage=1 
00:29:03.412 --rc genhtml_legend=1 00:29:03.412 --rc geninfo_all_blocks=1 00:29:03.412 --rc geninfo_unexecuted_blocks=1 00:29:03.412 00:29:03.412 ' 00:29:03.412 12:12:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:03.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.412 --rc genhtml_branch_coverage=1 00:29:03.412 --rc genhtml_function_coverage=1 00:29:03.412 --rc genhtml_legend=1 00:29:03.413 --rc geninfo_all_blocks=1 00:29:03.413 --rc geninfo_unexecuted_blocks=1 00:29:03.413 00:29:03.413 ' 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:03.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.413 --rc genhtml_branch_coverage=1 00:29:03.413 --rc genhtml_function_coverage=1 00:29:03.413 --rc genhtml_legend=1 00:29:03.413 --rc geninfo_all_blocks=1 00:29:03.413 --rc geninfo_unexecuted_blocks=1 00:29:03.413 00:29:03.413 ' 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- 
nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@50 -- # : 0 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:03.413 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:03.413 
12:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:03.413 00:29:03.413 real 0m0.239s 00:29:03.413 user 0m0.127s 00:29:03.413 sys 0m0.128s 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:03.413 ************************************ 00:29:03.413 END TEST dma 00:29:03.413 ************************************ 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:03.413 ************************************ 00:29:03.413 START TEST nvmf_identify 00:29:03.413 ************************************ 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:03.413 * Looking for test storage... 
00:29:03.413 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:29:03.413 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:03.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.675 --rc genhtml_branch_coverage=1 00:29:03.675 --rc genhtml_function_coverage=1 00:29:03.675 --rc genhtml_legend=1 00:29:03.675 --rc geninfo_all_blocks=1 00:29:03.675 --rc geninfo_unexecuted_blocks=1 00:29:03.675 00:29:03.675 ' 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:29:03.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.675 --rc genhtml_branch_coverage=1 00:29:03.675 --rc genhtml_function_coverage=1 00:29:03.675 --rc genhtml_legend=1 00:29:03.675 --rc geninfo_all_blocks=1 00:29:03.675 --rc geninfo_unexecuted_blocks=1 00:29:03.675 00:29:03.675 ' 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:03.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.675 --rc genhtml_branch_coverage=1 00:29:03.675 --rc genhtml_function_coverage=1 00:29:03.675 --rc genhtml_legend=1 00:29:03.675 --rc geninfo_all_blocks=1 00:29:03.675 --rc geninfo_unexecuted_blocks=1 00:29:03.675 00:29:03.675 ' 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:03.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:03.675 --rc genhtml_branch_coverage=1 00:29:03.675 --rc genhtml_function_coverage=1 00:29:03.675 --rc genhtml_legend=1 00:29:03.675 --rc geninfo_all_blocks=1 00:29:03.675 --rc geninfo_unexecuted_blocks=1 00:29:03.675 00:29:03.675 ' 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- 
# NVMF_TRANSPORT_OPTS= 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:03.675 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # : 0 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:03.676 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : 
integer expression expected 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # remove_target_ns 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # xtrace_disable 00:29:03.676 12:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # pci_devs=() 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # net_devs=() 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # e810=() 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # local -ga e810 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # x722=() 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # local -ga x722 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # mlx=() 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # local -ga mlx 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:11.816 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:11.816 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:11.816 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:11.816 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:11.817 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # is_hw=yes 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@247 -- # create_target_ns 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:11.817 12:12:35 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@28 -- # local -g _dev 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:11.817 12:12:35 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772161 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:11.817 10.0.0.1 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # 
val_to_ip 167772162 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772162 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:11.817 10.0.0.2 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:11.817 12:12:35 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:11.817 
12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:11.817 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ip netns 
exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:11.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:11.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.649 ms 00:29:11.818 00:29:11.818 --- 10.0.0.1 ping statistics --- 00:29:11.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.818 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:11.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:11.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:29:11.818 00:29:11.818 --- 10.0.0.2 ping statistics --- 00:29:11.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:11.818 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # return 0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator1 
00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # return 1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev= 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@160 -- # return 0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:11.818 12:12:36 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:11.818 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target1 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # return 1 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev= 00:29:11.819 12:12:36 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@160 -- # return 0 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:29:11.819 ' 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1458954 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1458954 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 
-- # '[' -z 1458954 ']' 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:11.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:11.819 12:12:36 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:11.819 [2024-12-05 12:12:36.279572] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:29:11.819 [2024-12-05 12:12:36.279636] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.819 [2024-12-05 12:12:36.380068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:11.819 [2024-12-05 12:12:36.433906] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.819 [2024-12-05 12:12:36.433958] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.819 [2024-12-05 12:12:36.433967] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.819 [2024-12-05 12:12:36.433974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.819 [2024-12-05 12:12:36.433980] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:11.819 [2024-12-05 12:12:36.436015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.819 [2024-12-05 12:12:36.436176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:11.819 [2024-12-05 12:12:36.436336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:11.819 [2024-12-05 12:12:36.436336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:12.080 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:12.080 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:12.080 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:12.080 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.080 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:12.080 [2024-12-05 12:12:37.105480] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.080 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.080 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:12.080 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:12.080 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:12.343 Malloc0 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.343 12:12:37 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:12.343 [2024-12-05 12:12:37.224210] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:12.343 12:12:37 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:12.343 [ 00:29:12.343 { 00:29:12.343 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:12.343 "subtype": "Discovery", 00:29:12.343 "listen_addresses": [ 00:29:12.343 { 00:29:12.343 "trtype": "TCP", 00:29:12.343 "adrfam": "IPv4", 00:29:12.343 "traddr": "10.0.0.2", 00:29:12.343 "trsvcid": "4420" 00:29:12.343 } 00:29:12.343 ], 00:29:12.343 "allow_any_host": true, 00:29:12.343 "hosts": [] 00:29:12.343 }, 00:29:12.343 { 00:29:12.343 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:12.343 "subtype": "NVMe", 00:29:12.343 "listen_addresses": [ 00:29:12.343 { 00:29:12.343 "trtype": "TCP", 00:29:12.343 "adrfam": "IPv4", 00:29:12.343 "traddr": "10.0.0.2", 00:29:12.343 "trsvcid": "4420" 00:29:12.343 } 00:29:12.343 ], 00:29:12.343 "allow_any_host": true, 00:29:12.343 "hosts": [], 00:29:12.343 "serial_number": "SPDK00000000000001", 00:29:12.343 "model_number": "SPDK bdev Controller", 00:29:12.343 "max_namespaces": 32, 00:29:12.343 "min_cntlid": 1, 00:29:12.343 "max_cntlid": 65519, 00:29:12.343 "namespaces": [ 00:29:12.343 { 00:29:12.343 "nsid": 1, 00:29:12.343 "bdev_name": "Malloc0", 00:29:12.343 "name": "Malloc0", 00:29:12.343 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:12.343 "eui64": "ABCDEF0123456789", 00:29:12.343 "uuid": "6b78359f-faf4-416b-9579-5e2bbe369368" 00:29:12.343 } 00:29:12.343 ] 00:29:12.343 } 00:29:12.343 ] 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.343 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:12.343 [2024-12-05 12:12:37.288626] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:29:12.344 [2024-12-05 12:12:37.288669] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459029 ] 00:29:12.344 [2024-12-05 12:12:37.345144] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:12.344 [2024-12-05 12:12:37.345210] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:12.344 [2024-12-05 12:12:37.345216] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:12.344 [2024-12-05 12:12:37.345238] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:12.344 [2024-12-05 12:12:37.345249] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:12.344 [2024-12-05 12:12:37.348836] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:12.344 [2024-12-05 12:12:37.348886] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x728690 0 00:29:12.344 [2024-12-05 12:12:37.356471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:12.344 [2024-12-05 12:12:37.356488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:12.344 [2024-12-05 12:12:37.356493] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:12.344 [2024-12-05 12:12:37.356496] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:12.344 [2024-12-05 12:12:37.356543] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.356550] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.356554] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x728690) 00:29:12.344 [2024-12-05 12:12:37.356571] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:12.344 [2024-12-05 12:12:37.356594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a100, cid 0, qid 0 00:29:12.344 [2024-12-05 12:12:37.364468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.344 [2024-12-05 12:12:37.364480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.344 [2024-12-05 12:12:37.364484] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.364495] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a100) on tqpair=0x728690 00:29:12.344 [2024-12-05 12:12:37.364511] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:12.344 [2024-12-05 12:12:37.364521] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:12.344 [2024-12-05 12:12:37.364527] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:12.344 [2024-12-05 12:12:37.364545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.364549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.364553] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x728690) 
00:29:12.344 [2024-12-05 12:12:37.364562] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.344 [2024-12-05 12:12:37.364578] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a100, cid 0, qid 0 00:29:12.344 [2024-12-05 12:12:37.364808] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.344 [2024-12-05 12:12:37.364814] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.344 [2024-12-05 12:12:37.364818] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.364822] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a100) on tqpair=0x728690 00:29:12.344 [2024-12-05 12:12:37.364831] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:12.344 [2024-12-05 12:12:37.364839] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:12.344 [2024-12-05 12:12:37.364846] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.364850] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.364853] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x728690) 00:29:12.344 [2024-12-05 12:12:37.364860] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.344 [2024-12-05 12:12:37.364871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a100, cid 0, qid 0 00:29:12.344 [2024-12-05 12:12:37.365073] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.344 [2024-12-05 12:12:37.365079] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:12.344 [2024-12-05 12:12:37.365083] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.365087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a100) on tqpair=0x728690 00:29:12.344 [2024-12-05 12:12:37.365093] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:12.344 [2024-12-05 12:12:37.365102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:12.344 [2024-12-05 12:12:37.365108] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.365112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.365116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x728690) 00:29:12.344 [2024-12-05 12:12:37.365123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.344 [2024-12-05 12:12:37.365133] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a100, cid 0, qid 0 00:29:12.344 [2024-12-05 12:12:37.365308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.344 [2024-12-05 12:12:37.365315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.344 [2024-12-05 12:12:37.365318] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.365328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a100) on tqpair=0x728690 00:29:12.344 [2024-12-05 12:12:37.365334] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:12.344 [2024-12-05 12:12:37.365344] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.365348] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.365351] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x728690) 00:29:12.344 [2024-12-05 12:12:37.365358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.344 [2024-12-05 12:12:37.365369] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a100, cid 0, qid 0 00:29:12.344 [2024-12-05 12:12:37.365579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.344 [2024-12-05 12:12:37.365586] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.344 [2024-12-05 12:12:37.365589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.365593] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a100) on tqpair=0x728690 00:29:12.344 [2024-12-05 12:12:37.365599] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:12.344 [2024-12-05 12:12:37.365604] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:12.344 [2024-12-05 12:12:37.365612] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:12.344 [2024-12-05 12:12:37.365723] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:12.344 [2024-12-05 12:12:37.365728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:29:12.344 [2024-12-05 12:12:37.365739] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.365742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.344 [2024-12-05 12:12:37.365746] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x728690) 00:29:12.344 [2024-12-05 12:12:37.365753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.344 [2024-12-05 12:12:37.365764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a100, cid 0, qid 0 00:29:12.345 [2024-12-05 12:12:37.365976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.345 [2024-12-05 12:12:37.365982] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.345 [2024-12-05 12:12:37.365986] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.365989] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a100) on tqpair=0x728690 00:29:12.345 [2024-12-05 12:12:37.365995] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:12.345 [2024-12-05 12:12:37.366005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.366009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.366013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x728690) 00:29:12.345 [2024-12-05 12:12:37.366019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.345 [2024-12-05 12:12:37.366030] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a100, cid 0, qid 0 00:29:12.345 [2024-12-05 
12:12:37.366211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.345 [2024-12-05 12:12:37.366219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.345 [2024-12-05 12:12:37.366223] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.366227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a100) on tqpair=0x728690 00:29:12.345 [2024-12-05 12:12:37.366232] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:12.345 [2024-12-05 12:12:37.366237] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:12.345 [2024-12-05 12:12:37.366245] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:12.345 [2024-12-05 12:12:37.366253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:12.345 [2024-12-05 12:12:37.366263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.366267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x728690) 00:29:12.345 [2024-12-05 12:12:37.366274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.345 [2024-12-05 12:12:37.366284] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a100, cid 0, qid 0 00:29:12.345 [2024-12-05 12:12:37.366521] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:12.345 [2024-12-05 12:12:37.366528] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =7 00:29:12.345 [2024-12-05 12:12:37.366532] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.366536] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x728690): datao=0, datal=4096, cccid=0 00:29:12.345 [2024-12-05 12:12:37.366541] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x78a100) on tqpair(0x728690): expected_datao=0, payload_size=4096 00:29:12.345 [2024-12-05 12:12:37.366546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.366555] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.366560] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.366691] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.345 [2024-12-05 12:12:37.366698] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.345 [2024-12-05 12:12:37.366701] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.366705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a100) on tqpair=0x728690 00:29:12.345 [2024-12-05 12:12:37.366714] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:12.345 [2024-12-05 12:12:37.366719] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:12.345 [2024-12-05 12:12:37.366724] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:12.345 [2024-12-05 12:12:37.366730] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:12.345 [2024-12-05 12:12:37.366735] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
fuses compare and write: 1 00:29:12.345 [2024-12-05 12:12:37.366741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:12.345 [2024-12-05 12:12:37.366750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:12.345 [2024-12-05 12:12:37.366758] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.366762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.366768] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x728690) 00:29:12.345 [2024-12-05 12:12:37.366776] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:12.345 [2024-12-05 12:12:37.366787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a100, cid 0, qid 0 00:29:12.345 [2024-12-05 12:12:37.367010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.345 [2024-12-05 12:12:37.367017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.345 [2024-12-05 12:12:37.367020] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.367024] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a100) on tqpair=0x728690 00:29:12.345 [2024-12-05 12:12:37.367034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.367038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.367042] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x728690) 00:29:12.345 [2024-12-05 12:12:37.367048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.345 [2024-12-05 12:12:37.367054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.367058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.367062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x728690) 00:29:12.345 [2024-12-05 12:12:37.367068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.345 [2024-12-05 12:12:37.367074] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.367078] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.367081] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x728690) 00:29:12.345 [2024-12-05 12:12:37.367087] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.345 [2024-12-05 12:12:37.367093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.367097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.367100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.345 [2024-12-05 12:12:37.367106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.345 [2024-12-05 12:12:37.367111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:12.345 [2024-12-05 12:12:37.367123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep 
alive timeout (timeout 30000 ms) 00:29:12.345 [2024-12-05 12:12:37.367130] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.345 [2024-12-05 12:12:37.367133] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x728690) 00:29:12.345 [2024-12-05 12:12:37.367140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.345 [2024-12-05 12:12:37.367153] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a100, cid 0, qid 0 00:29:12.345 [2024-12-05 12:12:37.367158] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a280, cid 1, qid 0 00:29:12.345 [2024-12-05 12:12:37.367163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a400, cid 2, qid 0 00:29:12.345 [2024-12-05 12:12:37.367167] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.345 [2024-12-05 12:12:37.367172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a700, cid 4, qid 0 00:29:12.345 [2024-12-05 12:12:37.367427] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.346 [2024-12-05 12:12:37.367434] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.346 [2024-12-05 12:12:37.367438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.346 [2024-12-05 12:12:37.367441] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a700) on tqpair=0x728690 00:29:12.346 [2024-12-05 12:12:37.367447] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:12.346 [2024-12-05 12:12:37.367453] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:29:12.346 [2024-12-05 12:12:37.367470] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.346 [2024-12-05 12:12:37.367474] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x728690) 00:29:12.346 [2024-12-05 12:12:37.367480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.346 [2024-12-05 12:12:37.367491] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a700, cid 4, qid 0 00:29:12.346 [2024-12-05 12:12:37.367710] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:12.346 [2024-12-05 12:12:37.367716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:12.346 [2024-12-05 12:12:37.367720] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:12.346 [2024-12-05 12:12:37.367724] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x728690): datao=0, datal=4096, cccid=4 00:29:12.346 [2024-12-05 12:12:37.367728] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x78a700) on tqpair(0x728690): expected_datao=0, payload_size=4096 00:29:12.346 [2024-12-05 12:12:37.367733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.346 [2024-12-05 12:12:37.367746] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:12.346 [2024-12-05 12:12:37.367750] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.411464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.616 [2024-12-05 12:12:37.411478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.616 [2024-12-05 12:12:37.411482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.411486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a700) on tqpair=0x728690 00:29:12.616 [2024-12-05 12:12:37.411503] 
nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:12.616 [2024-12-05 12:12:37.411535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.411540] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x728690) 00:29:12.616 [2024-12-05 12:12:37.411548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.616 [2024-12-05 12:12:37.411555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.411559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.411563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x728690) 00:29:12.616 [2024-12-05 12:12:37.411569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.616 [2024-12-05 12:12:37.411587] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a700, cid 4, qid 0 00:29:12.616 [2024-12-05 12:12:37.411593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a880, cid 5, qid 0 00:29:12.616 [2024-12-05 12:12:37.411851] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:12.616 [2024-12-05 12:12:37.411858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:12.616 [2024-12-05 12:12:37.411861] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.411869] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x728690): datao=0, datal=1024, cccid=4 00:29:12.616 [2024-12-05 12:12:37.411874] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x78a700) on tqpair(0x728690): expected_datao=0, 
payload_size=1024 00:29:12.616 [2024-12-05 12:12:37.411879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.411886] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.411889] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.411895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.616 [2024-12-05 12:12:37.411901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.616 [2024-12-05 12:12:37.411905] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.411908] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a880) on tqpair=0x728690 00:29:12.616 [2024-12-05 12:12:37.453683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.616 [2024-12-05 12:12:37.453695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.616 [2024-12-05 12:12:37.453698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.453703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a700) on tqpair=0x728690 00:29:12.616 [2024-12-05 12:12:37.453716] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.453721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x728690) 00:29:12.616 [2024-12-05 12:12:37.453728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.616 [2024-12-05 12:12:37.453746] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a700, cid 4, qid 0 00:29:12.616 [2024-12-05 12:12:37.453960] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:12.616 [2024-12-05 12:12:37.453966] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:12.616 [2024-12-05 12:12:37.453970] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.453975] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x728690): datao=0, datal=3072, cccid=4 00:29:12.616 [2024-12-05 12:12:37.453980] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x78a700) on tqpair(0x728690): expected_datao=0, payload_size=3072 00:29:12.616 [2024-12-05 12:12:37.453984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.453997] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.454002] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.454137] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.616 [2024-12-05 12:12:37.454144] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.616 [2024-12-05 12:12:37.454147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.454151] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a700) on tqpair=0x728690 00:29:12.616 [2024-12-05 12:12:37.454160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.454164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x728690) 00:29:12.616 [2024-12-05 12:12:37.454171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.616 [2024-12-05 12:12:37.454186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a700, cid 4, qid 0 00:29:12.616 [2024-12-05 12:12:37.454431] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:12.616 [2024-12-05 
12:12:37.454438] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:12.616 [2024-12-05 12:12:37.454442] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.454445] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x728690): datao=0, datal=8, cccid=4 00:29:12.616 [2024-12-05 12:12:37.458466] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x78a700) on tqpair(0x728690): expected_datao=0, payload_size=8 00:29:12.616 [2024-12-05 12:12:37.458473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.458480] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.458484] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.498467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.616 [2024-12-05 12:12:37.498478] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.616 [2024-12-05 12:12:37.498482] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.616 [2024-12-05 12:12:37.498486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a700) on tqpair=0x728690 00:29:12.616 ===================================================== 00:29:12.616 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:12.616 ===================================================== 00:29:12.616 Controller Capabilities/Features 00:29:12.616 ================================ 00:29:12.616 Vendor ID: 0000 00:29:12.616 Subsystem Vendor ID: 0000 00:29:12.616 Serial Number: .................... 00:29:12.616 Model Number: ........................................ 
00:29:12.616 Firmware Version: 25.01 00:29:12.616 Recommended Arb Burst: 0 00:29:12.616 IEEE OUI Identifier: 00 00 00 00:29:12.616 Multi-path I/O 00:29:12.616 May have multiple subsystem ports: No 00:29:12.616 May have multiple controllers: No 00:29:12.616 Associated with SR-IOV VF: No 00:29:12.616 Max Data Transfer Size: 131072 00:29:12.616 Max Number of Namespaces: 0 00:29:12.616 Max Number of I/O Queues: 1024 00:29:12.616 NVMe Specification Version (VS): 1.3 00:29:12.616 NVMe Specification Version (Identify): 1.3 00:29:12.616 Maximum Queue Entries: 128 00:29:12.616 Contiguous Queues Required: Yes 00:29:12.616 Arbitration Mechanisms Supported 00:29:12.616 Weighted Round Robin: Not Supported 00:29:12.616 Vendor Specific: Not Supported 00:29:12.616 Reset Timeout: 15000 ms 00:29:12.616 Doorbell Stride: 4 bytes 00:29:12.616 NVM Subsystem Reset: Not Supported 00:29:12.616 Command Sets Supported 00:29:12.616 NVM Command Set: Supported 00:29:12.616 Boot Partition: Not Supported 00:29:12.616 Memory Page Size Minimum: 4096 bytes 00:29:12.616 Memory Page Size Maximum: 4096 bytes 00:29:12.616 Persistent Memory Region: Not Supported 00:29:12.616 Optional Asynchronous Events Supported 00:29:12.616 Namespace Attribute Notices: Not Supported 00:29:12.616 Firmware Activation Notices: Not Supported 00:29:12.616 ANA Change Notices: Not Supported 00:29:12.616 PLE Aggregate Log Change Notices: Not Supported 00:29:12.616 LBA Status Info Alert Notices: Not Supported 00:29:12.616 EGE Aggregate Log Change Notices: Not Supported 00:29:12.616 Normal NVM Subsystem Shutdown event: Not Supported 00:29:12.616 Zone Descriptor Change Notices: Not Supported 00:29:12.616 Discovery Log Change Notices: Supported 00:29:12.616 Controller Attributes 00:29:12.616 128-bit Host Identifier: Not Supported 00:29:12.616 Non-Operational Permissive Mode: Not Supported 00:29:12.617 NVM Sets: Not Supported 00:29:12.617 Read Recovery Levels: Not Supported 00:29:12.617 Endurance Groups: Not Supported 00:29:12.617 
Predictable Latency Mode: Not Supported
00:29:12.617 Traffic Based Keep Alive: Not Supported
00:29:12.617 Namespace Granularity: Not Supported
00:29:12.617 SQ Associations: Not Supported
00:29:12.617 UUID List: Not Supported
00:29:12.617 Multi-Domain Subsystem: Not Supported
00:29:12.617 Fixed Capacity Management: Not Supported
00:29:12.617 Variable Capacity Management: Not Supported
00:29:12.617 Delete Endurance Group: Not Supported
00:29:12.617 Delete NVM Set: Not Supported
00:29:12.617 Extended LBA Formats Supported: Not Supported
00:29:12.617 Flexible Data Placement Supported: Not Supported
00:29:12.617 
00:29:12.617 Controller Memory Buffer Support
00:29:12.617 ================================
00:29:12.617 Supported: No
00:29:12.617 
00:29:12.617 Persistent Memory Region Support
00:29:12.617 ================================
00:29:12.617 Supported: No
00:29:12.617 
00:29:12.617 Admin Command Set Attributes
00:29:12.617 ============================
00:29:12.617 Security Send/Receive: Not Supported
00:29:12.617 Format NVM: Not Supported
00:29:12.617 Firmware Activate/Download: Not Supported
00:29:12.617 Namespace Management: Not Supported
00:29:12.617 Device Self-Test: Not Supported
00:29:12.617 Directives: Not Supported
00:29:12.617 NVMe-MI: Not Supported
00:29:12.617 Virtualization Management: Not Supported
00:29:12.617 Doorbell Buffer Config: Not Supported
00:29:12.617 Get LBA Status Capability: Not Supported
00:29:12.617 Command & Feature Lockdown Capability: Not Supported
00:29:12.617 Abort Command Limit: 1
00:29:12.617 Async Event Request Limit: 4
00:29:12.617 Number of Firmware Slots: N/A
00:29:12.617 Firmware Slot 1 Read-Only: N/A
00:29:12.617 Firmware Activation Without Reset: N/A
00:29:12.617 Multiple Update Detection Support: N/A
00:29:12.617 Firmware Update Granularity: No Information Provided
00:29:12.617 Per-Namespace SMART Log: No
00:29:12.617 Asymmetric Namespace Access Log Page: Not Supported
00:29:12.617 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:29:12.617 Command Effects Log Page: Not Supported
00:29:12.617 Get Log Page Extended Data: Supported
00:29:12.617 Telemetry Log Pages: Not Supported
00:29:12.617 Persistent Event Log Pages: Not Supported
00:29:12.617 Supported Log Pages Log Page: May Support
00:29:12.617 Commands Supported & Effects Log Page: Not Supported
00:29:12.617 Feature Identifiers & Effects Log Page: May Support
00:29:12.617 NVMe-MI Commands & Effects Log Page: May Support
00:29:12.617 Data Area 4 for Telemetry Log: Not Supported
00:29:12.617 Error Log Page Entries Supported: 128
00:29:12.617 Keep Alive: Not Supported
00:29:12.617 
00:29:12.617 NVM Command Set Attributes
00:29:12.617 ==========================
00:29:12.617 Submission Queue Entry Size
00:29:12.617 Max: 1
00:29:12.617 Min: 1
00:29:12.617 Completion Queue Entry Size
00:29:12.617 Max: 1
00:29:12.617 Min: 1
00:29:12.617 Number of Namespaces: 0
00:29:12.617 Compare Command: Not Supported
00:29:12.617 Write Uncorrectable Command: Not Supported
00:29:12.617 Dataset Management Command: Not Supported
00:29:12.617 Write Zeroes Command: Not Supported
00:29:12.617 Set Features Save Field: Not Supported
00:29:12.617 Reservations: Not Supported
00:29:12.617 Timestamp: Not Supported
00:29:12.617 Copy: Not Supported
00:29:12.617 Volatile Write Cache: Not Present
00:29:12.617 Atomic Write Unit (Normal): 1
00:29:12.617 Atomic Write Unit (PFail): 1
00:29:12.617 Atomic Compare & Write Unit: 1
00:29:12.617 Fused Compare & Write: Supported
00:29:12.617 Scatter-Gather List
00:29:12.617 SGL Command Set: Supported
00:29:12.617 SGL Keyed: Supported
00:29:12.617 SGL Bit Bucket Descriptor: Not Supported
00:29:12.617 SGL Metadata Pointer: Not Supported
00:29:12.617 Oversized SGL: Not Supported
00:29:12.617 SGL Metadata Address: Not Supported
00:29:12.617 SGL Offset: Supported
00:29:12.617 Transport SGL Data Block: Not Supported
00:29:12.617 Replay Protected Memory Block: Not Supported
00:29:12.617 
00:29:12.617 Firmware Slot Information
00:29:12.617 =========================
00:29:12.617 Active slot: 0
00:29:12.617 
00:29:12.617 
00:29:12.617 Error Log
00:29:12.617 =========
00:29:12.617 
00:29:12.617 Active Namespaces
00:29:12.617 =================
00:29:12.617 Discovery Log Page
00:29:12.617 ==================
00:29:12.617 Generation Counter: 2
00:29:12.617 Number of Records: 2
00:29:12.617 Record Format: 0
00:29:12.617 
00:29:12.617 Discovery Log Entry 0
00:29:12.617 ----------------------
00:29:12.617 Transport Type: 3 (TCP)
00:29:12.617 Address Family: 1 (IPv4)
00:29:12.617 Subsystem Type: 3 (Current Discovery Subsystem)
00:29:12.617 Entry Flags:
00:29:12.617 Duplicate Returned Information: 1
00:29:12.617 Explicit Persistent Connection Support for Discovery: 1
00:29:12.617 Transport Requirements:
00:29:12.617 Secure Channel: Not Required
00:29:12.617 Port ID: 0 (0x0000)
00:29:12.617 Controller ID: 65535 (0xffff)
00:29:12.617 Admin Max SQ Size: 128
00:29:12.617 Transport Service Identifier: 4420
00:29:12.617 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:29:12.617 Transport Address: 10.0.0.2
00:29:12.617 Discovery Log Entry 1
00:29:12.617 ----------------------
00:29:12.617 Transport Type: 3 (TCP)
00:29:12.617 Address Family: 1 (IPv4)
00:29:12.617 Subsystem Type: 2 (NVM Subsystem)
00:29:12.617 Entry Flags:
00:29:12.617 Duplicate Returned Information: 0
00:29:12.617 Explicit Persistent Connection Support for Discovery: 0
00:29:12.617 Transport Requirements:
00:29:12.617 Secure Channel: Not Required
00:29:12.617 Port ID: 0 (0x0000)
00:29:12.617 Controller ID: 65535 (0xffff)
00:29:12.617 Admin Max SQ Size: 128
00:29:12.617 Transport Service Identifier: 4420
00:29:12.617 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:29:12.617 Transport Address: 10.0.0.2 [2024-12-05 12:12:37.498596] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:29:12.617 [2024-12-05
12:12:37.498608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a100) on tqpair=0x728690 00:29:12.617 [2024-12-05 12:12:37.498616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.617 [2024-12-05 12:12:37.498621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a280) on tqpair=0x728690 00:29:12.617 [2024-12-05 12:12:37.498626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.617 [2024-12-05 12:12:37.498631] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a400) on tqpair=0x728690 00:29:12.617 [2024-12-05 12:12:37.498636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.617 [2024-12-05 12:12:37.498641] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.617 [2024-12-05 12:12:37.498645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.617 [2024-12-05 12:12:37.498655] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.617 [2024-12-05 12:12:37.498659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.617 [2024-12-05 12:12:37.498662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.617 [2024-12-05 12:12:37.498670] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.617 [2024-12-05 12:12:37.498685] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.617 [2024-12-05 12:12:37.498886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.617 [2024-12-05 
12:12:37.498893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.617 [2024-12-05 12:12:37.498896] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.617 [2024-12-05 12:12:37.498900] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.617 [2024-12-05 12:12:37.498908] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.617 [2024-12-05 12:12:37.498912] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.617 [2024-12-05 12:12:37.498915] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.617 [2024-12-05 12:12:37.498922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.617 [2024-12-05 12:12:37.498936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.617 [2024-12-05 12:12:37.499172] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.617 [2024-12-05 12:12:37.499179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.617 [2024-12-05 12:12:37.499183] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.617 [2024-12-05 12:12:37.499187] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.618 [2024-12-05 12:12:37.499195] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:12.618 [2024-12-05 12:12:37.499200] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:12.618 [2024-12-05 12:12:37.499210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.499214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.618 
[2024-12-05 12:12:37.499217] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.618 [2024-12-05 12:12:37.499224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.618 [2024-12-05 12:12:37.499235] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.618 [2024-12-05 12:12:37.499447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.618 [2024-12-05 12:12:37.499472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.618 [2024-12-05 12:12:37.499478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.499482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.618 [2024-12-05 12:12:37.499493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.499497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.499501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.618 [2024-12-05 12:12:37.499508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.618 [2024-12-05 12:12:37.499519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.618 [2024-12-05 12:12:37.499728] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.618 [2024-12-05 12:12:37.499734] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.618 [2024-12-05 12:12:37.499738] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.499742] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 
00:29:12.618 [2024-12-05 12:12:37.499751] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.499755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.499759] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.618 [2024-12-05 12:12:37.499765] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.618 [2024-12-05 12:12:37.499776] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.618 [2024-12-05 12:12:37.499961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.618 [2024-12-05 12:12:37.499967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.618 [2024-12-05 12:12:37.499971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.499975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.618 [2024-12-05 12:12:37.499984] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.499988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.499992] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.618 [2024-12-05 12:12:37.499998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.618 [2024-12-05 12:12:37.500009] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.618 [2024-12-05 12:12:37.500193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.618 [2024-12-05 12:12:37.500200] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.618 
[2024-12-05 12:12:37.500206] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.500210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.618 [2024-12-05 12:12:37.500219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.500223] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.500227] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.618 [2024-12-05 12:12:37.500234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.618 [2024-12-05 12:12:37.500244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.618 [2024-12-05 12:12:37.500436] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.618 [2024-12-05 12:12:37.500443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.618 [2024-12-05 12:12:37.500446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.500450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.618 [2024-12-05 12:12:37.500468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.500472] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.500476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.618 [2024-12-05 12:12:37.500483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.618 [2024-12-05 12:12:37.500493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 
00:29:12.618 [2024-12-05 12:12:37.500672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.618 [2024-12-05 12:12:37.500678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.618 [2024-12-05 12:12:37.500682] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.500685] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.618 [2024-12-05 12:12:37.500695] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.500699] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.500703] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.618 [2024-12-05 12:12:37.500709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.618 [2024-12-05 12:12:37.500720] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.618 [2024-12-05 12:12:37.500942] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.618 [2024-12-05 12:12:37.500948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.618 [2024-12-05 12:12:37.500952] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.500956] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.618 [2024-12-05 12:12:37.500965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.500969] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.500973] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.618 [2024-12-05 12:12:37.500980] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.618 [2024-12-05 12:12:37.500990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.618 [2024-12-05 12:12:37.501161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.618 [2024-12-05 12:12:37.501167] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.618 [2024-12-05 12:12:37.501171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.501177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.618 [2024-12-05 12:12:37.501187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.501190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.501194] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.618 [2024-12-05 12:12:37.501201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.618 [2024-12-05 12:12:37.501211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.618 [2024-12-05 12:12:37.501429] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.618 [2024-12-05 12:12:37.501435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.618 [2024-12-05 12:12:37.501439] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.501443] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.618 [2024-12-05 12:12:37.501453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.501463] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.501466] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.618 [2024-12-05 12:12:37.501473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.618 [2024-12-05 12:12:37.501484] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.618 [2024-12-05 12:12:37.501658] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.618 [2024-12-05 12:12:37.501665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.618 [2024-12-05 12:12:37.501669] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.501673] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.618 [2024-12-05 12:12:37.501683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.501686] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.501690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.618 [2024-12-05 12:12:37.501697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.618 [2024-12-05 12:12:37.501707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.618 [2024-12-05 12:12:37.501894] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.618 [2024-12-05 12:12:37.501901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.618 [2024-12-05 12:12:37.501904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.501908] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.618 [2024-12-05 12:12:37.501918] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.501922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.618 [2024-12-05 12:12:37.501926] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.618 [2024-12-05 12:12:37.501932] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.618 [2024-12-05 12:12:37.501943] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.618 [2024-12-05 12:12:37.502125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.619 [2024-12-05 12:12:37.502131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.619 [2024-12-05 12:12:37.502134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.502138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.619 [2024-12-05 12:12:37.502151] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.502155] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.502159] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.619 [2024-12-05 12:12:37.502165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.619 [2024-12-05 12:12:37.502176] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.619 [2024-12-05 12:12:37.502368] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.619 [2024-12-05 
12:12:37.502374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.619 [2024-12-05 12:12:37.502378] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.502382] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.619 [2024-12-05 12:12:37.502391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.502395] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.502399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x728690) 00:29:12.619 [2024-12-05 12:12:37.502406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.619 [2024-12-05 12:12:37.502416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x78a580, cid 3, qid 0 00:29:12.619 [2024-12-05 12:12:37.506467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.619 [2024-12-05 12:12:37.506476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.619 [2024-12-05 12:12:37.506480] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.506484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x78a580) on tqpair=0x728690 00:29:12.619 [2024-12-05 12:12:37.506493] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:29:12.619 00:29:12.619 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:12.619 [2024-12-05 12:12:37.552732] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 
initialization... 00:29:12.619 [2024-12-05 12:12:37.552796] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1459154 ] 00:29:12.619 [2024-12-05 12:12:37.609955] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:12.619 [2024-12-05 12:12:37.610015] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:12.619 [2024-12-05 12:12:37.610020] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:12.619 [2024-12-05 12:12:37.610041] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:12.619 [2024-12-05 12:12:37.610051] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:12.619 [2024-12-05 12:12:37.610624] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:12.619 [2024-12-05 12:12:37.610660] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13ee690 0 00:29:12.619 [2024-12-05 12:12:37.616470] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:12.619 [2024-12-05 12:12:37.616485] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:12.619 [2024-12-05 12:12:37.616494] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:12.619 [2024-12-05 12:12:37.616498] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:12.619 [2024-12-05 12:12:37.616532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.616538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.616542] 
nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ee690) 00:29:12.619 [2024-12-05 12:12:37.616556] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:12.619 [2024-12-05 12:12:37.616580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450100, cid 0, qid 0 00:29:12.619 [2024-12-05 12:12:37.624477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.619 [2024-12-05 12:12:37.624488] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.619 [2024-12-05 12:12:37.624491] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.624496] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450100) on tqpair=0x13ee690 00:29:12.619 [2024-12-05 12:12:37.624505] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:12.619 [2024-12-05 12:12:37.624513] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:12.619 [2024-12-05 12:12:37.624519] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:12.619 [2024-12-05 12:12:37.624534] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.624538] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.624542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ee690) 00:29:12.619 [2024-12-05 12:12:37.624550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.619 [2024-12-05 12:12:37.624566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450100, cid 0, qid 0 00:29:12.619 [2024-12-05 
12:12:37.624646] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.619 [2024-12-05 12:12:37.624653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.619 [2024-12-05 12:12:37.624657] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.624661] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450100) on tqpair=0x13ee690 00:29:12.619 [2024-12-05 12:12:37.624668] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:12.619 [2024-12-05 12:12:37.624676] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:12.619 [2024-12-05 12:12:37.624683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.624687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.624691] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ee690) 00:29:12.619 [2024-12-05 12:12:37.624698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.619 [2024-12-05 12:12:37.624709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450100, cid 0, qid 0 00:29:12.619 [2024-12-05 12:12:37.624810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.619 [2024-12-05 12:12:37.624816] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.619 [2024-12-05 12:12:37.624820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.624824] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450100) on tqpair=0x13ee690 00:29:12.619 [2024-12-05 12:12:37.624830] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:29:12.619 [2024-12-05 12:12:37.624842] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:12.619 [2024-12-05 12:12:37.624849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.624852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.624856] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ee690) 00:29:12.619 [2024-12-05 12:12:37.624863] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.619 [2024-12-05 12:12:37.624874] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450100, cid 0, qid 0 00:29:12.619 [2024-12-05 12:12:37.624944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.619 [2024-12-05 12:12:37.624950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.619 [2024-12-05 12:12:37.624954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.619 [2024-12-05 12:12:37.624957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450100) on tqpair=0x13ee690 00:29:12.619 [2024-12-05 12:12:37.624963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:12.619 [2024-12-05 12:12:37.624972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.624976] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.624980] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ee690) 00:29:12.620 [2024-12-05 12:12:37.624987] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.620 [2024-12-05 12:12:37.624998] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450100, cid 0, qid 0 00:29:12.620 [2024-12-05 12:12:37.625070] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.620 [2024-12-05 12:12:37.625077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.620 [2024-12-05 12:12:37.625081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450100) on tqpair=0x13ee690 00:29:12.620 [2024-12-05 12:12:37.625089] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:12.620 [2024-12-05 12:12:37.625094] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:12.620 [2024-12-05 12:12:37.625102] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:12.620 [2024-12-05 12:12:37.625211] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:12.620 [2024-12-05 12:12:37.625216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:12.620 [2024-12-05 12:12:37.625224] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ee690) 00:29:12.620 [2024-12-05 12:12:37.625238] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.620 [2024-12-05 12:12:37.625249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450100, cid 0, qid 0 00:29:12.620 [2024-12-05 12:12:37.625319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.620 [2024-12-05 12:12:37.625325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.620 [2024-12-05 12:12:37.625329] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625333] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450100) on tqpair=0x13ee690 00:29:12.620 [2024-12-05 12:12:37.625340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:12.620 [2024-12-05 12:12:37.625350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625357] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ee690) 00:29:12.620 [2024-12-05 12:12:37.625364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.620 [2024-12-05 12:12:37.625375] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450100, cid 0, qid 0 00:29:12.620 [2024-12-05 12:12:37.625447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.620 [2024-12-05 12:12:37.625460] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.620 [2024-12-05 12:12:37.625464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625468] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x1450100) on tqpair=0x13ee690 00:29:12.620 [2024-12-05 12:12:37.625472] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:12.620 [2024-12-05 12:12:37.625477] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:12.620 [2024-12-05 12:12:37.625486] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:12.620 [2024-12-05 12:12:37.625496] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:12.620 [2024-12-05 12:12:37.625505] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625509] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ee690) 00:29:12.620 [2024-12-05 12:12:37.625516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.620 [2024-12-05 12:12:37.625527] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450100, cid 0, qid 0 00:29:12.620 [2024-12-05 12:12:37.625631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:12.620 [2024-12-05 12:12:37.625638] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:12.620 [2024-12-05 12:12:37.625641] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625645] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ee690): datao=0, datal=4096, cccid=0 00:29:12.620 [2024-12-05 12:12:37.625650] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1450100) on tqpair(0x13ee690): expected_datao=0, 
payload_size=4096 00:29:12.620 [2024-12-05 12:12:37.625654] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625673] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625678] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.620 [2024-12-05 12:12:37.625757] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.620 [2024-12-05 12:12:37.625760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625764] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450100) on tqpair=0x13ee690 00:29:12.620 [2024-12-05 12:12:37.625773] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:12.620 [2024-12-05 12:12:37.625778] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:12.620 [2024-12-05 12:12:37.625783] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:12.620 [2024-12-05 12:12:37.625789] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:12.620 [2024-12-05 12:12:37.625794] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:12.620 [2024-12-05 12:12:37.625798] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:12.620 [2024-12-05 12:12:37.625807] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:12.620 [2024-12-05 12:12:37.625814] nvme_tcp.c: 732:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625818] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625822] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ee690) 00:29:12.620 [2024-12-05 12:12:37.625829] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:12.620 [2024-12-05 12:12:37.625840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450100, cid 0, qid 0 00:29:12.620 [2024-12-05 12:12:37.625918] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.620 [2024-12-05 12:12:37.625924] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.620 [2024-12-05 12:12:37.625928] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625931] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450100) on tqpair=0x13ee690 00:29:12.620 [2024-12-05 12:12:37.625938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13ee690) 00:29:12.620 [2024-12-05 12:12:37.625952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.620 [2024-12-05 12:12:37.625958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625962] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625965] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x13ee690) 00:29:12.620 [2024-12-05 12:12:37.625971] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.620 [2024-12-05 12:12:37.625977] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625981] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.625985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13ee690) 00:29:12.620 [2024-12-05 12:12:37.625990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.620 [2024-12-05 12:12:37.625996] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.626000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.620 [2024-12-05 12:12:37.626004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690) 00:29:12.620 [2024-12-05 12:12:37.626009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.620 [2024-12-05 12:12:37.626014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.626025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.626031] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13ee690) 00:29:12.621 [2024-12-05 12:12:37.626044] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.621 
[2024-12-05 12:12:37.626056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450100, cid 0, qid 0 00:29:12.621 [2024-12-05 12:12:37.626061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450280, cid 1, qid 0 00:29:12.621 [2024-12-05 12:12:37.626066] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450400, cid 2, qid 0 00:29:12.621 [2024-12-05 12:12:37.626070] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0 00:29:12.621 [2024-12-05 12:12:37.626075] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450700, cid 4, qid 0 00:29:12.621 [2024-12-05 12:12:37.626193] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.621 [2024-12-05 12:12:37.626199] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.621 [2024-12-05 12:12:37.626202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626206] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450700) on tqpair=0x13ee690 00:29:12.621 [2024-12-05 12:12:37.626211] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:12.621 [2024-12-05 12:12:37.626216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.626228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.626235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.626241] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626245] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626249] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13ee690) 00:29:12.621 [2024-12-05 12:12:37.626255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:12.621 [2024-12-05 12:12:37.626266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450700, cid 4, qid 0 00:29:12.621 [2024-12-05 12:12:37.626339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.621 [2024-12-05 12:12:37.626345] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.621 [2024-12-05 12:12:37.626349] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626353] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450700) on tqpair=0x13ee690 00:29:12.621 [2024-12-05 12:12:37.626419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.626428] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.626436] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13ee690) 00:29:12.621 [2024-12-05 12:12:37.626446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.621 [2024-12-05 12:12:37.626463] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450700, cid 4, qid 0 00:29:12.621 [2024-12-05 12:12:37.626541] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:12.621 [2024-12-05 12:12:37.626548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:12.621 [2024-12-05 12:12:37.626552] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626558] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ee690): datao=0, datal=4096, cccid=4 00:29:12.621 [2024-12-05 12:12:37.626562] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1450700) on tqpair(0x13ee690): expected_datao=0, payload_size=4096 00:29:12.621 [2024-12-05 12:12:37.626567] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626583] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626588] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.621 [2024-12-05 12:12:37.626671] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.621 [2024-12-05 12:12:37.626675] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626679] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450700) on tqpair=0x13ee690 00:29:12.621 [2024-12-05 12:12:37.626691] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:12.621 [2024-12-05 12:12:37.626701] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.626710] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.626717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:29:12.621 [2024-12-05 12:12:37.626721] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13ee690) 00:29:12.621 [2024-12-05 12:12:37.626727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.621 [2024-12-05 12:12:37.626739] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450700, cid 4, qid 0 00:29:12.621 [2024-12-05 12:12:37.626846] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:12.621 [2024-12-05 12:12:37.626852] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:12.621 [2024-12-05 12:12:37.626856] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626860] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ee690): datao=0, datal=4096, cccid=4 00:29:12.621 [2024-12-05 12:12:37.626864] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1450700) on tqpair(0x13ee690): expected_datao=0, payload_size=4096 00:29:12.621 [2024-12-05 12:12:37.626868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626875] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626879] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.621 [2024-12-05 12:12:37.626976] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.621 [2024-12-05 12:12:37.626980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.626983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450700) on tqpair=0x13ee690 00:29:12.621 [2024-12-05 12:12:37.626996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.627006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.627013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.627016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13ee690) 00:29:12.621 [2024-12-05 12:12:37.627023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.621 [2024-12-05 12:12:37.627033] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450700, cid 4, qid 0 00:29:12.621 [2024-12-05 12:12:37.627115] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:12.621 [2024-12-05 12:12:37.627121] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:12.621 [2024-12-05 12:12:37.627125] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.627128] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ee690): datao=0, datal=4096, cccid=4 00:29:12.621 [2024-12-05 12:12:37.627133] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1450700) on tqpair(0x13ee690): expected_datao=0, payload_size=4096 00:29:12.621 [2024-12-05 12:12:37.627137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.627153] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.627157] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.627257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.621 [2024-12-05 12:12:37.627263] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.621 [2024-12-05 12:12:37.627267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.621 [2024-12-05 12:12:37.627270] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450700) on tqpair=0x13ee690 00:29:12.621 [2024-12-05 12:12:37.627281] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.627290] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.627298] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.627304] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.627310] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.627315] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:12.621 [2024-12-05 12:12:37.627321] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:12.621 [2024-12-05 12:12:37.627325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:12.622 [2024-12-05 12:12:37.627331] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:12.622 [2024-12-05 12:12:37.627348] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13ee690) 00:29:12.622 [2024-12-05 12:12:37.627358] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.622 [2024-12-05 12:12:37.627365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627368] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13ee690) 00:29:12.622 [2024-12-05 12:12:37.627378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:12.622 [2024-12-05 12:12:37.627392] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450700, cid 4, qid 0 00:29:12.622 [2024-12-05 12:12:37.627397] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450880, cid 5, qid 0 00:29:12.622 [2024-12-05 12:12:37.627492] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.622 [2024-12-05 12:12:37.627498] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.622 [2024-12-05 12:12:37.627504] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450700) on tqpair=0x13ee690 00:29:12.622 [2024-12-05 12:12:37.627515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.622 [2024-12-05 12:12:37.627521] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.622 [2024-12-05 12:12:37.627524] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627528] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450880) on tqpair=0x13ee690 00:29:12.622 [2024-12-05 12:12:37.627537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13ee690) 00:29:12.622 [2024-12-05 12:12:37.627547] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.622 [2024-12-05 12:12:37.627558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450880, cid 5, qid 0 00:29:12.622 [2024-12-05 12:12:37.627631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.622 [2024-12-05 12:12:37.627637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.622 [2024-12-05 12:12:37.627641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450880) on tqpair=0x13ee690 00:29:12.622 [2024-12-05 12:12:37.627655] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13ee690) 00:29:12.622 [2024-12-05 12:12:37.627665] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.622 [2024-12-05 12:12:37.627676] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450880, cid 5, qid 0 00:29:12.622 [2024-12-05 12:12:37.627748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.622 [2024-12-05 12:12:37.627754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.622 [2024-12-05 12:12:37.627758] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627762] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450880) on tqpair=0x13ee690 00:29:12.622 [2024-12-05 12:12:37.627771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13ee690) 00:29:12.622 [2024-12-05 12:12:37.627781] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.622 [2024-12-05 12:12:37.627791] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450880, cid 5, qid 0 00:29:12.622 [2024-12-05 12:12:37.627868] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.622 [2024-12-05 12:12:37.627874] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.622 [2024-12-05 12:12:37.627877] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450880) on tqpair=0x13ee690 00:29:12.622 [2024-12-05 12:12:37.627895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13ee690) 00:29:12.622 [2024-12-05 12:12:37.627906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.622 [2024-12-05 12:12:37.627913] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627917] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13ee690) 00:29:12.622 [2024-12-05 12:12:37.627923] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.622 [2024-12-05 12:12:37.627935] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627939] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x13ee690) 00:29:12.622 [2024-12-05 12:12:37.627945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.622 [2024-12-05 12:12:37.627952] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.627956] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13ee690) 00:29:12.622 [2024-12-05 12:12:37.627962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.622 [2024-12-05 12:12:37.627974] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450880, cid 5, qid 0 00:29:12.622 [2024-12-05 12:12:37.627979] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450700, cid 4, qid 0 00:29:12.622 [2024-12-05 12:12:37.627984] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450a00, cid 6, qid 0 00:29:12.622 [2024-12-05 12:12:37.627988] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450b80, cid 7, qid 0 00:29:12.622 [2024-12-05 12:12:37.628142] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:12.622 [2024-12-05 12:12:37.628148] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:12.622 [2024-12-05 12:12:37.628151] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628155] 
nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ee690): datao=0, datal=8192, cccid=5 00:29:12.622 [2024-12-05 12:12:37.628159] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1450880) on tqpair(0x13ee690): expected_datao=0, payload_size=8192 00:29:12.622 [2024-12-05 12:12:37.628164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628240] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628244] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:12.622 [2024-12-05 12:12:37.628256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:12.622 [2024-12-05 12:12:37.628259] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628263] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ee690): datao=0, datal=512, cccid=4 00:29:12.622 [2024-12-05 12:12:37.628267] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1450700) on tqpair(0x13ee690): expected_datao=0, payload_size=512 00:29:12.622 [2024-12-05 12:12:37.628271] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628311] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628315] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628320] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:12.622 [2024-12-05 12:12:37.628326] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:12.622 [2024-12-05 12:12:37.628330] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628333] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data 
info on tqpair(0x13ee690): datao=0, datal=512, cccid=6 00:29:12.622 [2024-12-05 12:12:37.628337] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1450a00) on tqpair(0x13ee690): expected_datao=0, payload_size=512 00:29:12.622 [2024-12-05 12:12:37.628342] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628348] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628352] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628359] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:12.622 [2024-12-05 12:12:37.628365] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:12.622 [2024-12-05 12:12:37.628369] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628372] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13ee690): datao=0, datal=4096, cccid=7 00:29:12.622 [2024-12-05 12:12:37.628377] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1450b80) on tqpair(0x13ee690): expected_datao=0, payload_size=4096 00:29:12.622 [2024-12-05 12:12:37.628381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628388] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628391] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628401] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.622 [2024-12-05 12:12:37.628407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.622 [2024-12-05 12:12:37.628411] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.622 [2024-12-05 12:12:37.628415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450880) on tqpair=0x13ee690 00:29:12.622 [2024-12-05 
12:12:37.628426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.622 [2024-12-05 12:12:37.628432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.622 [2024-12-05 12:12:37.628436] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.622 [2024-12-05 12:12:37.628440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450700) on tqpair=0x13ee690
00:29:12.622 [2024-12-05 12:12:37.628450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.622 [2024-12-05 12:12:37.631838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.623 [2024-12-05 12:12:37.631845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.623 [2024-12-05 12:12:37.631849] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450a00) on tqpair=0x13ee690
00:29:12.623 [2024-12-05 12:12:37.631857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.623 [2024-12-05 12:12:37.631863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.623 [2024-12-05 12:12:37.631866] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.623 [2024-12-05 12:12:37.631870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450b80) on tqpair=0x13ee690
00:29:12.623 =====================================================
00:29:12.623 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:12.623 =====================================================
00:29:12.623 Controller Capabilities/Features
00:29:12.623 ================================
00:29:12.623 Vendor ID: 8086
00:29:12.623 Subsystem Vendor ID: 8086
00:29:12.623 Serial Number: SPDK00000000000001
00:29:12.623 Model Number: SPDK bdev Controller
00:29:12.623 Firmware Version: 25.01
00:29:12.623 Recommended Arb Burst: 6
00:29:12.623 IEEE OUI Identifier: e4 d2 5c
00:29:12.623 Multi-path I/O
00:29:12.623 May have multiple subsystem ports: Yes
00:29:12.623 May have multiple controllers: Yes
00:29:12.623 Associated with SR-IOV VF: No
00:29:12.623 Max Data Transfer Size: 131072
00:29:12.623 Max Number of Namespaces: 32
00:29:12.623 Max Number of I/O Queues: 127
00:29:12.623 NVMe Specification Version (VS): 1.3
00:29:12.623 NVMe Specification Version (Identify): 1.3
00:29:12.623 Maximum Queue Entries: 128
00:29:12.623 Contiguous Queues Required: Yes
00:29:12.623 Arbitration Mechanisms Supported
00:29:12.623 Weighted Round Robin: Not Supported
00:29:12.623 Vendor Specific: Not Supported
00:29:12.623 Reset Timeout: 15000 ms
00:29:12.623 Doorbell Stride: 4 bytes
00:29:12.623 NVM Subsystem Reset: Not Supported
00:29:12.623 Command Sets Supported
00:29:12.623 NVM Command Set: Supported
00:29:12.623 Boot Partition: Not Supported
00:29:12.623 Memory Page Size Minimum: 4096 bytes
00:29:12.623 Memory Page Size Maximum: 4096 bytes
00:29:12.623 Persistent Memory Region: Not Supported
00:29:12.623 Optional Asynchronous Events Supported
00:29:12.623 Namespace Attribute Notices: Supported
00:29:12.623 Firmware Activation Notices: Not Supported
00:29:12.623 ANA Change Notices: Not Supported
00:29:12.623 PLE Aggregate Log Change Notices: Not Supported
00:29:12.623 LBA Status Info Alert Notices: Not Supported
00:29:12.623 EGE Aggregate Log Change Notices: Not Supported
00:29:12.623 Normal NVM Subsystem Shutdown event: Not Supported
00:29:12.623 Zone Descriptor Change Notices: Not Supported
00:29:12.623 Discovery Log Change Notices: Not Supported
00:29:12.623 Controller Attributes
00:29:12.623 128-bit Host Identifier: Supported
00:29:12.623 Non-Operational Permissive Mode: Not Supported
00:29:12.623 NVM Sets: Not Supported
00:29:12.623 Read Recovery Levels: Not Supported
00:29:12.623 Endurance Groups: Not Supported
00:29:12.623 Predictable Latency Mode: Not Supported
00:29:12.623 Traffic Based Keep ALive: Not Supported
00:29:12.623 Namespace Granularity: Not Supported
00:29:12.623 SQ Associations: Not Supported
00:29:12.623 UUID List: Not Supported
00:29:12.623 Multi-Domain Subsystem: Not Supported
00:29:12.623 Fixed Capacity Management: Not Supported
00:29:12.623 Variable Capacity Management: Not Supported
00:29:12.623 Delete Endurance Group: Not Supported
00:29:12.623 Delete NVM Set: Not Supported
00:29:12.623 Extended LBA Formats Supported: Not Supported
00:29:12.623 Flexible Data Placement Supported: Not Supported
00:29:12.623 
00:29:12.623 Controller Memory Buffer Support
00:29:12.623 ================================
00:29:12.623 Supported: No
00:29:12.623 
00:29:12.623 Persistent Memory Region Support
00:29:12.623 ================================
00:29:12.623 Supported: No
00:29:12.623 
00:29:12.623 Admin Command Set Attributes
00:29:12.623 ============================
00:29:12.623 Security Send/Receive: Not Supported
00:29:12.623 Format NVM: Not Supported
00:29:12.623 Firmware Activate/Download: Not Supported
00:29:12.623 Namespace Management: Not Supported
00:29:12.623 Device Self-Test: Not Supported
00:29:12.623 Directives: Not Supported
00:29:12.623 NVMe-MI: Not Supported
00:29:12.623 Virtualization Management: Not Supported
00:29:12.623 Doorbell Buffer Config: Not Supported
00:29:12.623 Get LBA Status Capability: Not Supported
00:29:12.623 Command & Feature Lockdown Capability: Not Supported
00:29:12.623 Abort Command Limit: 4
00:29:12.623 Async Event Request Limit: 4
00:29:12.623 Number of Firmware Slots: N/A
00:29:12.623 Firmware Slot 1 Read-Only: N/A
00:29:12.623 Firmware Activation Without Reset: N/A
00:29:12.623 Multiple Update Detection Support: N/A
00:29:12.623 Firmware Update Granularity: No Information Provided
00:29:12.623 Per-Namespace SMART Log: No
00:29:12.623 Asymmetric Namespace Access Log Page: Not Supported
00:29:12.623 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:29:12.623 Command Effects Log Page: Supported
00:29:12.623 Get Log Page Extended Data: Supported
00:29:12.623 Telemetry Log Pages: Not Supported
Persistent Event Log Pages: Not Supported 00:29:12.623 Supported Log Pages Log Page: May Support 00:29:12.623 Commands Supported & Effects Log Page: Not Supported 00:29:12.623 Feature Identifiers & Effects Log Page:May Support 00:29:12.623 NVMe-MI Commands & Effects Log Page: May Support 00:29:12.623 Data Area 4 for Telemetry Log: Not Supported 00:29:12.623 Error Log Page Entries Supported: 128 00:29:12.623 Keep Alive: Supported 00:29:12.623 Keep Alive Granularity: 10000 ms 00:29:12.623 00:29:12.623 NVM Command Set Attributes 00:29:12.623 ========================== 00:29:12.623 Submission Queue Entry Size 00:29:12.623 Max: 64 00:29:12.623 Min: 64 00:29:12.623 Completion Queue Entry Size 00:29:12.623 Max: 16 00:29:12.623 Min: 16 00:29:12.623 Number of Namespaces: 32 00:29:12.623 Compare Command: Supported 00:29:12.623 Write Uncorrectable Command: Not Supported 00:29:12.623 Dataset Management Command: Supported 00:29:12.623 Write Zeroes Command: Supported 00:29:12.623 Set Features Save Field: Not Supported 00:29:12.623 Reservations: Supported 00:29:12.623 Timestamp: Not Supported 00:29:12.623 Copy: Supported 00:29:12.623 Volatile Write Cache: Present 00:29:12.623 Atomic Write Unit (Normal): 1 00:29:12.623 Atomic Write Unit (PFail): 1 00:29:12.623 Atomic Compare & Write Unit: 1 00:29:12.623 Fused Compare & Write: Supported 00:29:12.623 Scatter-Gather List 00:29:12.623 SGL Command Set: Supported 00:29:12.623 SGL Keyed: Supported 00:29:12.623 SGL Bit Bucket Descriptor: Not Supported 00:29:12.623 SGL Metadata Pointer: Not Supported 00:29:12.623 Oversized SGL: Not Supported 00:29:12.623 SGL Metadata Address: Not Supported 00:29:12.623 SGL Offset: Supported 00:29:12.623 Transport SGL Data Block: Not Supported 00:29:12.623 Replay Protected Memory Block: Not Supported 00:29:12.623 00:29:12.623 Firmware Slot Information 00:29:12.623 ========================= 00:29:12.623 Active slot: 1 00:29:12.623 Slot 1 Firmware Revision: 25.01 00:29:12.623 00:29:12.623 00:29:12.623 
Commands Supported and Effects 00:29:12.623 ============================== 00:29:12.623 Admin Commands 00:29:12.623 -------------- 00:29:12.623 Get Log Page (02h): Supported 00:29:12.623 Identify (06h): Supported 00:29:12.623 Abort (08h): Supported 00:29:12.623 Set Features (09h): Supported 00:29:12.623 Get Features (0Ah): Supported 00:29:12.623 Asynchronous Event Request (0Ch): Supported 00:29:12.623 Keep Alive (18h): Supported 00:29:12.623 I/O Commands 00:29:12.623 ------------ 00:29:12.623 Flush (00h): Supported LBA-Change 00:29:12.623 Write (01h): Supported LBA-Change 00:29:12.623 Read (02h): Supported 00:29:12.623 Compare (05h): Supported 00:29:12.623 Write Zeroes (08h): Supported LBA-Change 00:29:12.623 Dataset Management (09h): Supported LBA-Change 00:29:12.623 Copy (19h): Supported LBA-Change 00:29:12.623 00:29:12.623 Error Log 00:29:12.623 ========= 00:29:12.623 00:29:12.623 Arbitration 00:29:12.623 =========== 00:29:12.623 Arbitration Burst: 1 00:29:12.623 00:29:12.623 Power Management 00:29:12.623 ================ 00:29:12.623 Number of Power States: 1 00:29:12.623 Current Power State: Power State #0 00:29:12.623 Power State #0: 00:29:12.623 Max Power: 0.00 W 00:29:12.623 Non-Operational State: Operational 00:29:12.623 Entry Latency: Not Reported 00:29:12.623 Exit Latency: Not Reported 00:29:12.623 Relative Read Throughput: 0 00:29:12.623 Relative Read Latency: 0 00:29:12.623 Relative Write Throughput: 0 00:29:12.623 Relative Write Latency: 0 00:29:12.623 Idle Power: Not Reported 00:29:12.623 Active Power: Not Reported 00:29:12.623 Non-Operational Permissive Mode: Not Supported 00:29:12.623 00:29:12.623 Health Information 00:29:12.623 ================== 00:29:12.624 Critical Warnings: 00:29:12.624 Available Spare Space: OK 00:29:12.624 Temperature: OK 00:29:12.624 Device Reliability: OK 00:29:12.624 Read Only: No 00:29:12.624 Volatile Memory Backup: OK 00:29:12.624 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:12.624 Temperature Threshold: 0 Kelvin 
(-273 Celsius) 00:29:12.624 Available Spare: 0% 00:29:12.624 Available Spare Threshold: 0% 00:29:12.624 Life Percentage Used:[2024-12-05 12:12:37.635564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.624 [2024-12-05 12:12:37.635571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13ee690) 00:29:12.624 [2024-12-05 12:12:37.635580] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.624 [2024-12-05 12:12:37.635602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450b80, cid 7, qid 0 00:29:12.624 [2024-12-05 12:12:37.635695] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.624 [2024-12-05 12:12:37.635702] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.624 [2024-12-05 12:12:37.635705] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.624 [2024-12-05 12:12:37.635709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450b80) on tqpair=0x13ee690 00:29:12.624 [2024-12-05 12:12:37.635750] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:12.624 [2024-12-05 12:12:37.635761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450100) on tqpair=0x13ee690 00:29:12.624 [2024-12-05 12:12:37.635767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.624 [2024-12-05 12:12:37.635773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450280) on tqpair=0x13ee690 00:29:12.624 [2024-12-05 12:12:37.635778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.624 [2024-12-05 12:12:37.635787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1450400) on tqpair=0x13ee690 00:29:12.624 [2024-12-05 12:12:37.635792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.624 [2024-12-05 12:12:37.635797] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690 00:29:12.624 [2024-12-05 12:12:37.635801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:12.624 [2024-12-05 12:12:37.635810] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.624 [2024-12-05 12:12:37.635814] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.624 [2024-12-05 12:12:37.635818] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690) 00:29:12.624 [2024-12-05 12:12:37.635825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.624 [2024-12-05 12:12:37.635840] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0 00:29:12.624 [2024-12-05 12:12:37.635907] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.624 [2024-12-05 12:12:37.635913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.624 [2024-12-05 12:12:37.635917] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.624 [2024-12-05 12:12:37.635921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690 00:29:12.624 [2024-12-05 12:12:37.635928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.624 [2024-12-05 12:12:37.635932] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.624 [2024-12-05 12:12:37.635935] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690) 
00:29:12.624 [2024-12-05 12:12:37.635942] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.624 [2024-12-05 12:12:37.635957] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.624 [2024-12-05 12:12:37.636040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.624 [2024-12-05 12:12:37.636046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.624 [2024-12-05 12:12:37.636049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636053] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.624 [2024-12-05 12:12:37.636058] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us
00:29:12.624 [2024-12-05 12:12:37.636063] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms
00:29:12.624 [2024-12-05 12:12:37.636072] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636076] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636079] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.624 [2024-12-05 12:12:37.636086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.624 [2024-12-05 12:12:37.636096] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.624 [2024-12-05 12:12:37.636158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.624 [2024-12-05 12:12:37.636164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.624 [2024-12-05 12:12:37.636168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.624 [2024-12-05 12:12:37.636182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.624 [2024-12-05 12:12:37.636199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.624 [2024-12-05 12:12:37.636210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.624 [2024-12-05 12:12:37.636289] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.624 [2024-12-05 12:12:37.636295] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.624 [2024-12-05 12:12:37.636298] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.624 [2024-12-05 12:12:37.636312] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636316] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636319] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.624 [2024-12-05 12:12:37.636326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.624 [2024-12-05 12:12:37.636336] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.624 [2024-12-05 12:12:37.636408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.624 [2024-12-05 12:12:37.636414] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.624 [2024-12-05 12:12:37.636417] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.624 [2024-12-05 12:12:37.636431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636434] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636438] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.624 [2024-12-05 12:12:37.636445] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.624 [2024-12-05 12:12:37.636464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.624 [2024-12-05 12:12:37.636536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.624 [2024-12-05 12:12:37.636543] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.624 [2024-12-05 12:12:37.636546] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636550] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.624 [2024-12-05 12:12:37.636560] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636567] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.624 [2024-12-05 12:12:37.636574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.624 [2024-12-05 12:12:37.636584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.624 [2024-12-05 12:12:37.636680] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.624 [2024-12-05 12:12:37.636686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.624 [2024-12-05 12:12:37.636689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.624 [2024-12-05 12:12:37.636703] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636707] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.624 [2024-12-05 12:12:37.636717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.624 [2024-12-05 12:12:37.636730] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.624 [2024-12-05 12:12:37.636799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.624 [2024-12-05 12:12:37.636806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.624 [2024-12-05 12:12:37.636809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.624 [2024-12-05 12:12:37.636823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636826] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636830] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.624 [2024-12-05 12:12:37.636837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.624 [2024-12-05 12:12:37.636848] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.624 [2024-12-05 12:12:37.636944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.624 [2024-12-05 12:12:37.636950] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.624 [2024-12-05 12:12:37.636954] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.624 [2024-12-05 12:12:37.636957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.625 [2024-12-05 12:12:37.636968] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.636972] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.636976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.625 [2024-12-05 12:12:37.636982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.625 [2024-12-05 12:12:37.636993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.625 [2024-12-05 12:12:37.637058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.625 [2024-12-05 12:12:37.637064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.625 [2024-12-05 12:12:37.637068] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637072] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.625 [2024-12-05 12:12:37.637082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637085] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.625 [2024-12-05 12:12:37.637096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.625 [2024-12-05 12:12:37.637106] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.625 [2024-12-05 12:12:37.637177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.625 [2024-12-05 12:12:37.637183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.625 [2024-12-05 12:12:37.637186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637190] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.625 [2024-12-05 12:12:37.637200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637204] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637207] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.625 [2024-12-05 12:12:37.637214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.625 [2024-12-05 12:12:37.637227] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.625 [2024-12-05 12:12:37.637290] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.625 [2024-12-05 12:12:37.637297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.625 [2024-12-05 12:12:37.637300] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637304] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.625 [2024-12-05 12:12:37.637314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637318] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.625 [2024-12-05 12:12:37.637328] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.625 [2024-12-05 12:12:37.637339] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.625 [2024-12-05 12:12:37.637411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.625 [2024-12-05 12:12:37.637417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.625 [2024-12-05 12:12:37.637420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.625 [2024-12-05 12:12:37.637434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.625 [2024-12-05 12:12:37.637448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.625 [2024-12-05 12:12:37.637464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.625 [2024-12-05 12:12:37.637528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.625 [2024-12-05 12:12:37.637535] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.625 [2024-12-05 12:12:37.637538] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.625 [2024-12-05 12:12:37.637552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.625 [2024-12-05 12:12:37.637566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.625 [2024-12-05 12:12:37.637577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.625 [2024-12-05 12:12:37.637649] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.625 [2024-12-05 12:12:37.637655] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.625 [2024-12-05 12:12:37.637658] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637662] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.625 [2024-12-05 12:12:37.637673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637677] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637680] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.625 [2024-12-05 12:12:37.637687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.625 [2024-12-05 12:12:37.637697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.625 [2024-12-05 12:12:37.637764] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.625 [2024-12-05 12:12:37.637770] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.625 [2024-12-05 12:12:37.637774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.625 [2024-12-05 12:12:37.637787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637795] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.625 [2024-12-05 12:12:37.637801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.625 [2024-12-05 12:12:37.637812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.625 [2024-12-05 12:12:37.637882] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.625 [2024-12-05 12:12:37.637888] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.625 [2024-12-05 12:12:37.637892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637896] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.625 [2024-12-05 12:12:37.637907] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637910] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.637914] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.625 [2024-12-05 12:12:37.637921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.625 [2024-12-05 12:12:37.637931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.625 [2024-12-05 12:12:37.637999] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.625 [2024-12-05 12:12:37.638005] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.625 [2024-12-05 12:12:37.638009] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.638012] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.625 [2024-12-05 12:12:37.638022] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.638026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.638030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.625 [2024-12-05 12:12:37.638037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.625 [2024-12-05 12:12:37.638048] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.625 [2024-12-05 12:12:37.638146] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.625 [2024-12-05 12:12:37.638152] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.625 [2024-12-05 12:12:37.638155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.638159] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.625 [2024-12-05 12:12:37.638170] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.625 [2024-12-05 12:12:37.638174] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638178] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.626 [2024-12-05 12:12:37.638185] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.626 [2024-12-05 12:12:37.638195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.626 [2024-12-05 12:12:37.638263] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.626 [2024-12-05 12:12:37.638271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.626 [2024-12-05 12:12:37.638275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.626 [2024-12-05 12:12:37.638288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638292] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.626 [2024-12-05 12:12:37.638303] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.626 [2024-12-05 12:12:37.638313] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.626 [2024-12-05 12:12:37.638382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.626 [2024-12-05 12:12:37.638388] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.626 [2024-12-05 12:12:37.638391] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.626 [2024-12-05 12:12:37.638405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638412] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.626 [2024-12-05 12:12:37.638419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.626 [2024-12-05 12:12:37.638429] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.626 [2024-12-05 12:12:37.638502] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.626 [2024-12-05 12:12:37.638509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.626 [2024-12-05 12:12:37.638512] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638516] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.626 [2024-12-05 12:12:37.638526] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638530] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.626 [2024-12-05 12:12:37.638540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.626 [2024-12-05 12:12:37.638551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.626 [2024-12-05 12:12:37.638619] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.626 [2024-12-05 12:12:37.638625] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.626 [2024-12-05 12:12:37.638628] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638632] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.626 [2024-12-05 12:12:37.638642] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638646] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638649] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690)
00:29:12.626 [2024-12-05 12:12:37.638656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:12.626 [2024-12-05 12:12:37.638667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0
00:29:12.626 [2024-12-05 12:12:37.638736] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:29:12.626 [2024-12-05 12:12:37.638743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:29:12.626 [2024-12-05 12:12:37.638748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638752] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690
00:29:12.626 [2024-12-05 12:12:37.638762] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638766] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:29:12.626 [2024-12-05 12:12:37.638769] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*:
capsule_cmd cid=3 on tqpair(0x13ee690) 00:29:12.626 [2024-12-05 12:12:37.638776] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.626 [2024-12-05 12:12:37.638786] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0 00:29:12.626 [2024-12-05 12:12:37.638853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.626 [2024-12-05 12:12:37.638860] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.626 [2024-12-05 12:12:37.638863] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.638867] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690 00:29:12.626 [2024-12-05 12:12:37.638876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.638880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.638884] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690) 00:29:12.626 [2024-12-05 12:12:37.638890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.626 [2024-12-05 12:12:37.638901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0 00:29:12.626 [2024-12-05 12:12:37.638973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.626 [2024-12-05 12:12:37.638979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.626 [2024-12-05 12:12:37.638983] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.638986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690 00:29:12.626 [2024-12-05 12:12:37.638996] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.639000] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.639004] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690) 00:29:12.626 [2024-12-05 12:12:37.639010] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.626 [2024-12-05 12:12:37.639020] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0 00:29:12.626 [2024-12-05 12:12:37.639094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.626 [2024-12-05 12:12:37.639100] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.626 [2024-12-05 12:12:37.639104] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.639107] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690 00:29:12.626 [2024-12-05 12:12:37.639117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.639121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.639125] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690) 00:29:12.626 [2024-12-05 12:12:37.639131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.626 [2024-12-05 12:12:37.639142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0 00:29:12.626 [2024-12-05 12:12:37.639208] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.626 [2024-12-05 12:12:37.639214] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.626 [2024-12-05 12:12:37.639218] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.639224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690 00:29:12.626 [2024-12-05 12:12:37.639234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.639240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.639243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690) 00:29:12.626 [2024-12-05 12:12:37.639250] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.626 [2024-12-05 12:12:37.639262] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0 00:29:12.626 [2024-12-05 12:12:37.639327] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.626 [2024-12-05 12:12:37.639333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.626 [2024-12-05 12:12:37.639337] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.639341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690 00:29:12.626 [2024-12-05 12:12:37.639351] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.639355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.639359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690) 00:29:12.626 [2024-12-05 12:12:37.639365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.626 [2024-12-05 12:12:37.639376] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0 00:29:12.626 [2024-12-05 
12:12:37.639446] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.626 [2024-12-05 12:12:37.639452] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.626 [2024-12-05 12:12:37.639463] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.639467] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690 00:29:12.626 [2024-12-05 12:12:37.639477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.639481] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.639485] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690) 00:29:12.626 [2024-12-05 12:12:37.639492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.626 [2024-12-05 12:12:37.639502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0 00:29:12.626 [2024-12-05 12:12:37.643466] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.626 [2024-12-05 12:12:37.643475] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.626 [2024-12-05 12:12:37.643478] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.643482] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690 00:29:12.626 [2024-12-05 12:12:37.643493] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:12.626 [2024-12-05 12:12:37.643497] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:12.627 [2024-12-05 12:12:37.643501] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13ee690) 00:29:12.627 [2024-12-05 12:12:37.643508] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:12.627 [2024-12-05 12:12:37.643519] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1450580, cid 3, qid 0 00:29:12.627 [2024-12-05 12:12:37.643585] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:12.627 [2024-12-05 12:12:37.643591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:12.627 [2024-12-05 12:12:37.643594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:12.627 [2024-12-05 12:12:37.643598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1450580) on tqpair=0x13ee690 00:29:12.627 [2024-12-05 12:12:37.643609] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:29:12.627 0% 00:29:12.627 Data Units Read: 0 00:29:12.627 Data Units Written: 0 00:29:12.627 Host Read Commands: 0 00:29:12.627 Host Write Commands: 0 00:29:12.627 Controller Busy Time: 0 minutes 00:29:12.627 Power Cycles: 0 00:29:12.627 Power On Hours: 0 hours 00:29:12.627 Unsafe Shutdowns: 0 00:29:12.627 Unrecoverable Media Errors: 0 00:29:12.627 Lifetime Error Log Entries: 0 00:29:12.627 Warning Temperature Time: 0 minutes 00:29:12.627 Critical Temperature Time: 0 minutes 00:29:12.627 00:29:12.627 Number of Queues 00:29:12.627 ================ 00:29:12.627 Number of I/O Submission Queues: 127 00:29:12.627 Number of I/O Completion Queues: 127 00:29:12.627 00:29:12.627 Active Namespaces 00:29:12.627 ================= 00:29:12.627 Namespace ID:1 00:29:12.627 Error Recovery Timeout: Unlimited 00:29:12.627 Command Set Identifier: NVM (00h) 00:29:12.627 Deallocate: Supported 00:29:12.627 Deallocated/Unwritten Error: Not Supported 00:29:12.627 Deallocated Read Value: Unknown 00:29:12.627 Deallocate in Write Zeroes: Not Supported 00:29:12.627 Deallocated Guard Field: 0xFFFF 00:29:12.627 Flush: Supported 00:29:12.627 Reservation: Supported 
00:29:12.627 Namespace Sharing Capabilities: Multiple Controllers 00:29:12.627 Size (in LBAs): 131072 (0GiB) 00:29:12.627 Capacity (in LBAs): 131072 (0GiB) 00:29:12.627 Utilization (in LBAs): 131072 (0GiB) 00:29:12.627 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:12.627 EUI64: ABCDEF0123456789 00:29:12.627 UUID: 6b78359f-faf4-416b-9579-5e2bbe369368 00:29:12.627 Thin Provisioning: Not Supported 00:29:12.627 Per-NS Atomic Units: Yes 00:29:12.627 Atomic Boundary Size (Normal): 0 00:29:12.627 Atomic Boundary Size (PFail): 0 00:29:12.627 Atomic Boundary Offset: 0 00:29:12.627 Maximum Single Source Range Length: 65535 00:29:12.627 Maximum Copy Length: 65535 00:29:12.627 Maximum Source Range Count: 1 00:29:12.627 NGUID/EUI64 Never Reused: No 00:29:12.627 Namespace Write Protected: No 00:29:12.627 Number of LBA Formats: 1 00:29:12.627 Current LBA Format: LBA Format #00 00:29:12.627 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:12.627 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@99 -- # sync 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:12.888 12:12:37 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # set +e 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:12.888 rmmod nvme_tcp 00:29:12.888 rmmod nvme_fabrics 00:29:12.888 rmmod nvme_keyring 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # set -e 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # return 0 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # '[' -n 1458954 ']' 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@337 -- # killprocess 1458954 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1458954 ']' 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1458954 00:29:12.888 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:12.889 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:12.889 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1458954 00:29:12.889 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:12.889 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:12.889 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1458954' 00:29:12.889 killing process with pid 1458954 00:29:12.889 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1458954 00:29:12.889 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@978 -- # wait 1458954 00:29:13.149 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:13.149 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # nvmf_fini 00:29:13.149 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@254 -- # local dev 00:29:13.149 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:13.149 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:13.149 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:13.149 12:12:37 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # return 0 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # _dev=0 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # dev_map=() 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@274 -- # iptr 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # iptables-save 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # iptables-restore 00:29:15.061 00:29:15.061 real 0m11.808s 00:29:15.061 user 0m8.575s 00:29:15.061 sys 0m6.270s 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.061 12:12:40 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:15.061 ************************************ 00:29:15.061 END TEST nvmf_identify 00:29:15.061 ************************************ 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:15.322 ************************************ 00:29:15.322 START TEST nvmf_perf 00:29:15.322 ************************************ 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:15.322 * Looking for test storage... 00:29:15.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 
'op=<' 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:15.322 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 
-- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:15.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.583 --rc genhtml_branch_coverage=1 00:29:15.583 --rc genhtml_function_coverage=1 00:29:15.583 --rc genhtml_legend=1 00:29:15.583 --rc geninfo_all_blocks=1 00:29:15.583 --rc geninfo_unexecuted_blocks=1 00:29:15.583 00:29:15.583 ' 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:15.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.583 --rc genhtml_branch_coverage=1 00:29:15.583 --rc genhtml_function_coverage=1 00:29:15.583 --rc genhtml_legend=1 00:29:15.583 --rc geninfo_all_blocks=1 00:29:15.583 --rc geninfo_unexecuted_blocks=1 00:29:15.583 00:29:15.583 ' 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:15.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.583 --rc genhtml_branch_coverage=1 00:29:15.583 --rc genhtml_function_coverage=1 00:29:15.583 --rc genhtml_legend=1 00:29:15.583 --rc geninfo_all_blocks=1 00:29:15.583 --rc geninfo_unexecuted_blocks=1 00:29:15.583 00:29:15.583 ' 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:15.583 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:15.583 --rc genhtml_branch_coverage=1 00:29:15.583 --rc genhtml_function_coverage=1 00:29:15.583 --rc genhtml_legend=1 00:29:15.583 --rc geninfo_all_blocks=1 00:29:15.583 --rc geninfo_unexecuted_blocks=1 00:29:15.583 00:29:15.583 ' 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:15.583 12:12:40 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:15.583 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # : 0 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # export 
NVMF_APP_SHM_ID 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:15.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # remove_target_ns 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:15.584 12:12:40 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # xtrace_disable 00:29:15.584 12:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # pci_devs=() 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # net_devs=() 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # e810=() 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # local -ga e810 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # x722=() 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # local -ga x722 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # mlx=() 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@138 -- # local -ga mlx 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 
00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:23.730 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:23.730 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:23.730 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:23.730 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 
-- # net_devs+=("${pci_net_devs[@]}") 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # is_hw=yes 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@247 -- # create_target_ns 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@28 -- # local -g _dev 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 
00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:23.730 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772161 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:29:23.731 10.0.0.1 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:23.731 12:12:47 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772162 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:23.731 10.0.0.2 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD 
]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address 
initiator0 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:23.731 
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:23.731 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.614 ms 00:29:23.731 00:29:23.731 --- 10.0.0.1 ping statistics --- 00:29:23.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.731 rtt min/avg/max/mdev = 0.614/0.614/0.614/0.000 ms 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 
10.0.0.2 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:23.731 12:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:23.731 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:23.731 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:29:23.731 00:29:23.731 --- 10.0.0.2 ping statistics --- 00:29:23.731 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:23.731 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # return 0 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:23.731 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 
-- # [[ -n '' ]] 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # return 1 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev= 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@160 -- # return 0 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:23.732 
12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target1 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # return 1 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev= 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@160 -- # return 0 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:29:23.732 ' 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # 
NVMF_TRANSPORT_OPTS='-t tcp' 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # nvmfpid=1463391 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # waitforlisten 1463391 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1463391 ']' 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:23.732 12:12:48 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:23.732 [2024-12-05 12:12:48.183378] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:29:23.732 [2024-12-05 12:12:48.183442] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.732 [2024-12-05 12:12:48.280606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:23.732 [2024-12-05 12:12:48.334395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.732 [2024-12-05 12:12:48.334447] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.732 [2024-12-05 12:12:48.334466] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:23.732 [2024-12-05 12:12:48.334474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:23.732 [2024-12-05 12:12:48.334480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:23.732 [2024-12-05 12:12:48.336982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:23.732 [2024-12-05 12:12:48.337145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:23.732 [2024-12-05 12:12:48.337305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:23.732 [2024-12-05 12:12:48.337306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:23.993 12:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:23.993 12:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:29:23.993 12:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:23.993 12:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:23.993 12:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:24.253 12:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.253 12:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:24.253 12:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:29:24.825 12:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:29:24.825 12:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:29:24.825 12:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:29:24.825 12:12:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:25.086 12:12:50 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:29:25.086 12:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:29:25.086 12:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:29:25.086 12:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:29:25.086 12:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:29:25.347 [2024-12-05 12:12:50.171567] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.347 12:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:25.608 12:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:25.608 12:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:25.608 12:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:29:25.608 12:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:25.869 12:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:26.153 [2024-12-05 12:12:50.926360] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.153 12:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:29:26.153 12:12:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:29:26.153 12:12:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:29:26.153 12:12:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:29:26.153 12:12:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:29:27.537 Initializing NVMe Controllers 00:29:27.537 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:29:27.537 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:29:27.537 Initialization complete. Launching workers. 00:29:27.537 ======================================================== 00:29:27.537 Latency(us) 00:29:27.537 Device Information : IOPS MiB/s Average min max 00:29:27.537 PCIE (0000:65:00.0) NSID 1 from core 0: 77555.97 302.95 412.09 13.40 5053.61 00:29:27.537 ======================================================== 00:29:27.537 Total : 77555.97 302.95 412.09 13.40 5053.61 00:29:27.537 00:29:27.537 12:12:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.003 Initializing NVMe Controllers 00:29:29.003 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:29.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:29.003 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:29.003 Initialization complete. Launching workers. 
00:29:29.003 ======================================================== 00:29:29.003 Latency(us) 00:29:29.003 Device Information : IOPS MiB/s Average min max 00:29:29.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 76.00 0.30 13222.17 105.94 45582.72 00:29:29.003 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 52.00 0.20 19626.24 7961.52 54875.40 00:29:29.004 ======================================================== 00:29:29.004 Total : 128.00 0.50 15823.82 105.94 54875.40 00:29:29.004 00:29:29.004 12:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:29.944 Initializing NVMe Controllers 00:29:29.944 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:29.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:29.944 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:29.944 Initialization complete. Launching workers. 
00:29:29.944 ======================================================== 00:29:29.944 Latency(us) 00:29:29.944 Device Information : IOPS MiB/s Average min max 00:29:29.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11799.00 46.09 2713.01 392.47 6269.21 00:29:29.945 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3856.00 15.06 8343.95 4949.64 18542.56 00:29:29.945 ======================================================== 00:29:29.945 Total : 15655.00 61.15 4099.97 392.47 18542.56 00:29:29.945 00:29:29.945 12:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:29:29.945 12:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:29:29.945 12:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:32.488 Initializing NVMe Controllers 00:29:32.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:32.488 Controller IO queue size 128, less than required. 00:29:32.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:32.488 Controller IO queue size 128, less than required. 00:29:32.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:32.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:32.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:32.488 Initialization complete. Launching workers. 
00:29:32.488 ======================================================== 00:29:32.488 Latency(us) 00:29:32.488 Device Information : IOPS MiB/s Average min max 00:29:32.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1805.50 451.37 71531.34 40808.39 123070.23 00:29:32.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 639.00 159.75 209347.64 55858.33 321020.05 00:29:32.488 ======================================================== 00:29:32.488 Total : 2444.50 611.12 107556.95 40808.39 321020.05 00:29:32.488 00:29:32.488 12:12:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:29:32.747 No valid NVMe controllers or AIO or URING devices found 00:29:32.747 Initializing NVMe Controllers 00:29:32.747 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:32.747 Controller IO queue size 128, less than required. 00:29:32.747 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:32.747 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:29:32.747 Controller IO queue size 128, less than required. 00:29:32.747 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:32.747 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:29:32.747 WARNING: Some requested NVMe devices were skipped 00:29:32.747 12:12:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:29:35.294 Initializing NVMe Controllers 00:29:35.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:35.294 Controller IO queue size 128, less than required. 00:29:35.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:35.294 Controller IO queue size 128, less than required. 00:29:35.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:35.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:35.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:35.294 Initialization complete. Launching workers. 
00:29:35.294 00:29:35.294 ==================== 00:29:35.294 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:29:35.294 TCP transport: 00:29:35.294 polls: 34551 00:29:35.294 idle_polls: 19749 00:29:35.294 sock_completions: 14802 00:29:35.294 nvme_completions: 7401 00:29:35.294 submitted_requests: 11082 00:29:35.294 queued_requests: 1 00:29:35.294 00:29:35.294 ==================== 00:29:35.294 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:29:35.294 TCP transport: 00:29:35.294 polls: 36284 00:29:35.294 idle_polls: 22767 00:29:35.294 sock_completions: 13517 00:29:35.294 nvme_completions: 7353 00:29:35.294 submitted_requests: 11120 00:29:35.294 queued_requests: 1 00:29:35.294 ======================================================== 00:29:35.294 Latency(us) 00:29:35.294 Device Information : IOPS MiB/s Average min max 00:29:35.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1849.99 462.50 70003.35 39871.94 115977.48 00:29:35.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1837.99 459.50 70613.73 27662.06 134517.37 00:29:35.294 ======================================================== 00:29:35.294 Total : 3687.98 922.00 70307.55 27662.06 134517.37 00:29:35.294 00:29:35.555 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:29:35.555 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:35.555 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:29:35.555 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:35.555 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:35.555 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:35.555 12:13:00 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@99 -- # sync 00:29:35.555 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:35.555 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # set +e 00:29:35.555 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:35.555 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:35.555 rmmod nvme_tcp 00:29:35.555 rmmod nvme_fabrics 00:29:35.816 rmmod nvme_keyring 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # set -e 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # return 0 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # '[' -n 1463391 ']' 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@337 -- # killprocess 1463391 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1463391 ']' 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1463391 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1463391 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1463391' 00:29:35.816 killing process with pid 1463391 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 1463391 00:29:35.816 12:13:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1463391 00:29:37.730 12:13:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:37.730 12:13:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # nvmf_fini 00:29:37.730 12:13:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@254 -- # local dev 00:29:37.731 12:13:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:37.731 12:13:02 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:37.731 12:13:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:37.731 12:13:02 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # return 0 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:40.276 12:13:04 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # _dev=0 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # dev_map=() 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@274 -- # iptr 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-save 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-restore 00:29:40.276 00:29:40.276 real 0m24.573s 00:29:40.276 user 0m58.930s 00:29:40.276 sys 0m8.748s 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:40.276 ************************************ 00:29:40.276 END TEST nvmf_perf 00:29:40.276 ************************************ 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.276 ************************************ 00:29:40.276 START TEST nvmf_fio_host 00:29:40.276 ************************************ 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:40.276 * Looking for test storage... 00:29:40.276 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:40.276 12:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:40.276 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:40.276 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:40.276 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:40.276 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:40.276 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:40.276 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:40.276 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:40.276 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:40.276 12:13:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:40.276 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:40.276 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:40.276 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:40.277 12:13:05 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:40.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.277 --rc genhtml_branch_coverage=1 00:29:40.277 --rc genhtml_function_coverage=1 00:29:40.277 --rc genhtml_legend=1 00:29:40.277 --rc geninfo_all_blocks=1 00:29:40.277 --rc geninfo_unexecuted_blocks=1 00:29:40.277 00:29:40.277 ' 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:40.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.277 --rc genhtml_branch_coverage=1 00:29:40.277 --rc genhtml_function_coverage=1 00:29:40.277 --rc genhtml_legend=1 00:29:40.277 --rc geninfo_all_blocks=1 00:29:40.277 --rc geninfo_unexecuted_blocks=1 00:29:40.277 00:29:40.277 ' 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:40.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.277 --rc genhtml_branch_coverage=1 00:29:40.277 --rc genhtml_function_coverage=1 00:29:40.277 --rc genhtml_legend=1 00:29:40.277 --rc geninfo_all_blocks=1 00:29:40.277 --rc geninfo_unexecuted_blocks=1 00:29:40.277 00:29:40.277 ' 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:40.277 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:40.277 --rc genhtml_branch_coverage=1 00:29:40.277 --rc genhtml_function_coverage=1 00:29:40.277 --rc genhtml_legend=1 00:29:40.277 --rc geninfo_all_blocks=1 00:29:40.277 --rc geninfo_unexecuted_blocks=1 00:29:40.277 00:29:40.277 ' 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 
00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # : 0 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:40.277 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # remove_target_ns 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # xtrace_disable 00:29:40.277 12:13:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # pci_devs=() 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # net_devs=() 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # e810=() 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # local -ga e810 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # x722=() 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # local -ga x722 00:29:48.415 12:13:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # mlx=() 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # local -ga mlx 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:48.415 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:48.415 12:13:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:48.416 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:48.416 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # (( 0 > 0 
)) 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:48.416 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:48.416 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # is_hw=yes 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@247 -- # create_target_ns 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:48.416 12:13:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@28 -- # local -g _dev 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 
00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772161 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 
-- # echo 10.0.0.1 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:48.416 10.0.0.1 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772162 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:48.416 10.0.0.2 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:29:48.416 12:13:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:48.416 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 
)) 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:48.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:48.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.637 ms 00:29:48.417 00:29:48.417 --- 10.0.0.1 ping statistics --- 00:29:48.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.417 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:48.417 
12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:48.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:48.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:29:48.417 00:29:48.417 --- 10.0.0.2 ping statistics --- 00:29:48.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:48.417 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # return 0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # 
echo cvl_0_0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # return 1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev= 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@160 -- # return 0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:29:48.417 12:13:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:48.417 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.417 12:13:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target1 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # return 1 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev= 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@160 -- # return 0 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:29:48.418 ' 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 
00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1470429 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1470429 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1470429 ']' 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:48.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.418 12:13:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.418 [2024-12-05 12:13:12.764836] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:29:48.418 [2024-12-05 12:13:12.764904] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:48.418 [2024-12-05 12:13:12.862358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.418 [2024-12-05 12:13:12.915612] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.418 [2024-12-05 12:13:12.915661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:48.418 [2024-12-05 12:13:12.915670] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.418 [2024-12-05 12:13:12.915677] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.418 [2024-12-05 12:13:12.915684] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:48.418 [2024-12-05 12:13:12.917788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.418 [2024-12-05 12:13:12.917947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:48.418 [2024-12-05 12:13:12.918109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:48.418 [2024-12-05 12:13:12.918110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:48.681 12:13:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.681 12:13:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:29:48.681 12:13:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:48.942 [2024-12-05 12:13:13.752488] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.942 12:13:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:48.942 12:13:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:48.942 12:13:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.942 12:13:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:49.203 Malloc1 00:29:49.203 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:49.465 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:49.465 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.727 [2024-12-05 12:13:14.592027] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.727 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:49.989 12:13:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:50.249 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:50.249 fio-3.35 
00:29:50.249 Starting 1 thread 00:29:52.805 00:29:52.805 test: (groupid=0, jobs=1): err= 0: pid=1471264: Thu Dec 5 12:13:17 2024 00:29:52.805 read: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(82.0MiB/2005msec) 00:29:52.805 slat (usec): min=2, max=285, avg= 2.20, stdev= 2.87 00:29:52.805 clat (usec): min=3711, max=9267, avg=6758.10, stdev=1123.31 00:29:52.805 lat (usec): min=3760, max=9274, avg=6760.30, stdev=1123.31 00:29:52.805 clat percentiles (usec): 00:29:52.805 | 1.00th=[ 4555], 5.00th=[ 4817], 10.00th=[ 5014], 20.00th=[ 5276], 00:29:52.805 | 30.00th=[ 6390], 40.00th=[ 6915], 50.00th=[ 7177], 60.00th=[ 7308], 00:29:52.805 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8160], 00:29:52.805 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[ 8979], 99.95th=[ 8979], 00:29:52.805 | 99.99th=[ 9110] 00:29:52.805 bw ( KiB/s): min=37696, max=52600, per=99.91%, avg=41850.00, stdev=7178.07, samples=4 00:29:52.805 iops : min= 9424, max=13150, avg=10462.50, stdev=1794.52, samples=4 00:29:52.805 write: IOPS=10.5k, BW=40.9MiB/s (42.9MB/s)(82.0MiB/2005msec); 0 zone resets 00:29:52.805 slat (usec): min=2, max=270, avg= 2.27, stdev= 2.06 00:29:52.805 clat (usec): min=2920, max=8178, avg=5431.00, stdev=895.03 00:29:52.805 lat (usec): min=2937, max=8180, avg=5433.26, stdev=895.07 00:29:52.805 clat percentiles (usec): 00:29:52.805 | 1.00th=[ 3654], 5.00th=[ 3884], 10.00th=[ 4047], 20.00th=[ 4293], 00:29:52.805 | 30.00th=[ 5080], 40.00th=[ 5538], 50.00th=[ 5735], 60.00th=[ 5866], 00:29:52.805 | 70.00th=[ 5997], 80.00th=[ 6194], 90.00th=[ 6390], 95.00th=[ 6521], 00:29:52.805 | 99.00th=[ 6849], 99.50th=[ 7046], 99.90th=[ 7308], 99.95th=[ 7635], 00:29:52.805 | 99.99th=[ 8029] 00:29:52.805 bw ( KiB/s): min=38056, max=52856, per=99.98%, avg=41888.00, stdev=7313.07, samples=4 00:29:52.805 iops : min= 9514, max=13214, avg=10472.00, stdev=1828.27, samples=4 00:29:52.805 lat (msec) : 4=4.25%, 10=95.75% 00:29:52.805 cpu : usr=74.65%, sys=24.30%, ctx=20, majf=0, minf=15 00:29:52.805 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:52.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:52.805 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:52.805 issued rwts: total=20996,21000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:52.805 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:52.805 00:29:52.805 Run status group 0 (all jobs): 00:29:52.805 READ: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=82.0MiB (86.0MB), run=2005-2005msec 00:29:52.805 WRITE: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=82.0MiB (86.0MB), run=2005-2005msec 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1347 -- # local asan_lib= 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:52.805 12:13:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:53.067 test: (g=0): rw=randrw, bs=(R) 
16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:53.067 fio-3.35 00:29:53.067 Starting 1 thread 00:29:55.609 00:29:55.609 test: (groupid=0, jobs=1): err= 0: pid=1471792: Thu Dec 5 12:13:20 2024 00:29:55.609 read: IOPS=9693, BW=151MiB/s (159MB/s)(304MiB/2005msec) 00:29:55.609 slat (usec): min=3, max=110, avg= 3.60, stdev= 1.57 00:29:55.609 clat (usec): min=1479, max=17098, avg=7928.99, stdev=1993.82 00:29:55.609 lat (usec): min=1483, max=17101, avg=7932.59, stdev=1993.94 00:29:55.609 clat percentiles (usec): 00:29:55.609 | 1.00th=[ 3916], 5.00th=[ 4948], 10.00th=[ 5473], 20.00th=[ 6194], 00:29:55.609 | 30.00th=[ 6718], 40.00th=[ 7242], 50.00th=[ 7767], 60.00th=[ 8356], 00:29:55.609 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11207], 00:29:55.609 | 99.00th=[12649], 99.50th=[13304], 99.90th=[13829], 99.95th=[13960], 00:29:55.609 | 99.99th=[14615] 00:29:55.609 bw ( KiB/s): min=71744, max=87136, per=50.13%, avg=77744.00, stdev=6901.84, samples=4 00:29:55.609 iops : min= 4484, max= 5446, avg=4859.00, stdev=431.37, samples=4 00:29:55.609 write: IOPS=5688, BW=88.9MiB/s (93.2MB/s)(159MiB/1786msec); 0 zone resets 00:29:55.609 slat (usec): min=39, max=309, avg=40.89, stdev= 6.98 00:29:55.609 clat (usec): min=2250, max=14926, avg=8957.29, stdev=1416.41 00:29:55.609 lat (usec): min=2290, max=15058, avg=8998.18, stdev=1417.96 00:29:55.609 clat percentiles (usec): 00:29:55.609 | 1.00th=[ 5735], 5.00th=[ 6980], 10.00th=[ 7373], 20.00th=[ 7767], 00:29:55.609 | 30.00th=[ 8160], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9241], 00:29:55.609 | 70.00th=[ 9634], 80.00th=[10159], 90.00th=[10683], 95.00th=[11338], 00:29:55.609 | 99.00th=[12518], 99.50th=[13304], 99.90th=[14484], 99.95th=[14615], 00:29:55.609 | 99.99th=[14877] 00:29:55.609 bw ( KiB/s): min=75520, max=90592, per=88.83%, avg=80848.00, stdev=6876.77, samples=4 00:29:55.609 iops : min= 4720, max= 5662, avg=5053.00, stdev=429.80, samples=4 00:29:55.609 lat (msec) : 
2=0.05%, 4=0.77%, 10=79.50%, 20=19.69% 00:29:55.609 cpu : usr=86.08%, sys=13.07%, ctx=11, majf=0, minf=31 00:29:55.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:29:55.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:55.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:55.609 issued rwts: total=19435,10159,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:55.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:55.609 00:29:55.609 Run status group 0 (all jobs): 00:29:55.609 READ: bw=151MiB/s (159MB/s), 151MiB/s-151MiB/s (159MB/s-159MB/s), io=304MiB (318MB), run=2005-2005msec 00:29:55.609 WRITE: bw=88.9MiB/s (93.2MB/s), 88.9MiB/s-88.9MiB/s (93.2MB/s-93.2MB/s), io=159MiB (166MB), run=1786-1786msec 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@99 -- # sync 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # set +e 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:55.609 rmmod 
nvme_tcp 00:29:55.609 rmmod nvme_fabrics 00:29:55.609 rmmod nvme_keyring 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # set -e 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # return 0 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # '[' -n 1470429 ']' 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@337 -- # killprocess 1470429 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1470429 ']' 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1470429 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1470429 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1470429' 00:29:55.609 killing process with pid 1470429 00:29:55.609 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1470429 00:29:55.610 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1470429 00:29:55.610 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:55.610 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # nvmf_fini 00:29:55.610 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/setup.sh@254 -- # local dev 00:29:55.610 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:55.610 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:55.610 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:55.610 12:13:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # return 0 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:58.156 12:13:22 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:58.156 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # _dev=0 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # dev_map=() 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@274 -- # iptr 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-save 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-restore 00:29:58.157 00:29:58.157 real 0m17.878s 00:29:58.157 user 1m4.875s 00:29:58.157 sys 0m7.734s 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.157 ************************************ 00:29:58.157 END TEST nvmf_fio_host 00:29:58.157 ************************************ 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:58.157 12:13:22 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:58.157 ************************************ 00:29:58.157 START TEST nvmf_failover 00:29:58.157 ************************************ 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:29:58.157 * Looking for test storage... 00:29:58.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:29:58.157 12:13:22 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:58.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.157 --rc genhtml_branch_coverage=1 00:29:58.157 --rc genhtml_function_coverage=1 00:29:58.157 --rc genhtml_legend=1 00:29:58.157 --rc geninfo_all_blocks=1 00:29:58.157 --rc geninfo_unexecuted_blocks=1 00:29:58.157 00:29:58.157 ' 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:58.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.157 --rc genhtml_branch_coverage=1 00:29:58.157 --rc genhtml_function_coverage=1 00:29:58.157 --rc genhtml_legend=1 00:29:58.157 --rc geninfo_all_blocks=1 00:29:58.157 --rc geninfo_unexecuted_blocks=1 00:29:58.157 00:29:58.157 ' 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:58.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.157 --rc genhtml_branch_coverage=1 00:29:58.157 --rc genhtml_function_coverage=1 00:29:58.157 --rc genhtml_legend=1 00:29:58.157 --rc geninfo_all_blocks=1 00:29:58.157 --rc geninfo_unexecuted_blocks=1 00:29:58.157 00:29:58.157 ' 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:58.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:58.157 --rc genhtml_branch_coverage=1 00:29:58.157 --rc genhtml_function_coverage=1 00:29:58.157 --rc genhtml_legend=1 00:29:58.157 --rc geninfo_all_blocks=1 00:29:58.157 --rc geninfo_unexecuted_blocks=1 00:29:58.157 00:29:58.157 ' 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:58.157 12:13:22 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:58.157 12:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:58.157 12:13:23 
nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.157 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # : 0 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:58.158 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # local -g is_hw=no 
00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # remove_target_ns 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # xtrace_disable 00:29:58.158 12:13:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # pci_devs=() 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # net_devs=() 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # e810=() 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # local -ga e810 00:30:06.299 12:13:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # x722=() 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # local -ga x722 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # mlx=() 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # local -ga mlx 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.299 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:06.300 12:13:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:06.300 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:06.300 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:06.300 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.300 12:13:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:06.300 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # is_hw=yes 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@247 -- # create_target_ns 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:06.300 12:13:30 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@28 -- # local -g _dev 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # 
[[ tcp == tcp ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772161 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_0 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:06.300 10.0.0.1 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772162 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:06.300 10.0.0.2 
00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:06.300 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # 
dev_map["initiator$id"]=cvl_0_0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 
00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:06.301 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:06.301 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.666 ms 00:30:06.301 00:30:06.301 --- 10.0.0.1 ping statistics --- 00:30:06.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.301 rtt min/avg/max/mdev = 0.666/0.666/0.666/0.000 ms 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:30:06.301 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.301 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:30:06.301 00:30:06.301 --- 10.0.0.2 ping statistics --- 00:30:06.301 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.301 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair++ )) 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # return 0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # 
local dev=initiator0 in_ns= ip 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator1 
00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # return 1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev= 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@160 -- # return 0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:06.301 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target1 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target1 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # return 1 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev= 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@160 -- # return 0 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:30:06.302 ' 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # nvmfpid=1476491 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # waitforlisten 1476491 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1476491 ']' 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.302 12:13:30 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:06.302 12:13:30 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:06.302 [2024-12-05 12:13:30.775836] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:30:06.302 [2024-12-05 12:13:30.775901] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.302 [2024-12-05 12:13:30.874115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:06.302 [2024-12-05 12:13:30.925952] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.302 [2024-12-05 12:13:30.926003] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.302 [2024-12-05 12:13:30.926011] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.302 [2024-12-05 12:13:30.926019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.302 [2024-12-05 12:13:30.926025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:06.302 [2024-12-05 12:13:30.928163] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.302 [2024-12-05 12:13:30.928307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.302 [2024-12-05 12:13:30.928309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.564 12:13:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.564 12:13:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:30:06.564 12:13:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:06.564 12:13:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:06.564 12:13:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:06.826 12:13:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.826 12:13:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:06.826 [2024-12-05 12:13:31.793750] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:06.826 12:13:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:07.086 Malloc0 00:30:07.086 12:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:07.347 12:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:07.608 12:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:07.608 [2024-12-05 12:13:32.621933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:07.608 12:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:07.870 [2024-12-05 12:13:32.822512] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:07.870 12:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:08.132 [2024-12-05 12:13:33.027285] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:08.132 12:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1477087 00:30:08.132 12:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:08.132 12:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:08.132 12:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1477087 /var/tmp/bdevperf.sock 00:30:08.132 12:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1477087 ']' 00:30:08.132 12:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:08.132 12:13:33 
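The setup steps above (failover.sh@22 through @28) issue a fixed RPC sequence against the target: create the TCP transport, create a malloc bdev, create the subsystem, attach the namespace, then add three listeners on ports 4420-4422. The sketch below reconstructs that sequence from the log but only prints the commands; `rpc.py` stands in for the full `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py` path, and nothing is executed, so the sketch runs without SPDK installed.

```shell
# Reconstructed RPC sequence from the failover.sh trace (printed, not run).
RPC="rpc.py"                       # shorthand for spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
setup_cmds=(
  "$RPC nvmf_create_transport -t tcp -o -u 8192"
  "$RPC bdev_malloc_create 64 512 -b Malloc0"
  "$RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001"
  "$RPC nvmf_subsystem_add_ns $NQN Malloc0"
)
# Three listeners on the same target IP give bdevperf alternate paths.
for port in 4420 4421 4422; do
  setup_cmds+=("$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s $port")
done
printf '%s\n' "${setup_cmds[@]}"
```

After this setup the log shows bdevperf starting with `-z -r /var/tmp/bdevperf.sock`, i.e. waiting for its own RPC socket before the controllers are attached.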
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:08.132 12:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:08.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:08.132 12:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:08.132 12:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:09.076 12:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.076 12:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:30:09.076 12:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:09.336 NVMe0n1 00:30:09.336 12:13:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:09.597 00:30:09.597 12:13:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1477308 00:30:09.597 12:13:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:09.597 12:13:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:10.540 12:13:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:10.953 12:13:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:14.273 12:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:14.273 00:30:14.273 12:13:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:14.273 [2024-12-05 12:13:39.139763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c94cc0 is same with the state(6) to be set 00:30:14.273 [the same tcp.c:1790 message for tqpair=0x1c94cc0 repeated ~50 more times between 12:13:39.139805 and 12:13:39.140047; duplicate lines elided] 00:30:14.274 12:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:17.575 12:13:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.575 [2024-12-05 12:13:42.332586] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.575 12:13:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:18.519
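The controller was attached with `-x failover` and two paths (4420 and 4421), so the test can yank listeners one at a time while bdevperf keeps I/O running. The timeline below is reconstructed from the failover.sh trace in this log (steps @35 through @57); it is printed as data only, nothing is executed.

```shell
# Failover choreography reconstructed from the log: the active listener is
# removed, I/O fails over to a surviving path, and paths are rotated.
steps=(
  "attach NVMe0 via 10.0.0.2:4420 and 10.0.0.2:4421 with -x failover"
  "start perform_tests via bdevperf.py, then remove listener 4420"
  "attach third path 10.0.0.2:4422"
  "remove listener 4421 -> I/O continues on 4422"
  "re-add listener 4420"
  "remove listener 4422 -> I/O returns to 4420"
)
printf '%s\n' "${steps[@]}"
```

The bursts of `tcp.c:1790 ... recv state of tqpair=... is same with the state(6)` messages in the log correspond to these listener removals: they are the target tearing down qpairs on the path being taken away, not a test failure.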
12:13:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:18.519 [2024-12-05 12:13:43.524266] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5a480 is same with the state(6) to be set 00:30:18.519 [the same tcp.c:1790 message for tqpair=0x1b5a480 repeated ~55 more times between 12:13:43.524304 and 12:13:43.524578; duplicate lines elided] 00:30:18.519 [2024-12-05 12:13:43.524582]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5a480 is same with the state(6) to be set 00:30:18.519 [2024-12-05 12:13:43.524587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5a480 is same with the state(6) to be set 00:30:18.519 [2024-12-05 12:13:43.524591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5a480 is same with the state(6) to be set 00:30:18.519 [2024-12-05 12:13:43.524595] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5a480 is same with the state(6) to be set 00:30:18.519 [2024-12-05 12:13:43.524600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5a480 is same with the state(6) to be set 00:30:18.519 [2024-12-05 12:13:43.524605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5a480 is same with the state(6) to be set 00:30:18.519 [2024-12-05 12:13:43.524610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5a480 is same with the state(6) to be set 00:30:18.519 [2024-12-05 12:13:43.524615] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5a480 is same with the state(6) to be set 00:30:18.519 [2024-12-05 12:13:43.524619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5a480 is same with the state(6) to be set 00:30:18.519 [2024-12-05 12:13:43.524624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b5a480 is same with the state(6) to be set 00:30:18.519 12:13:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1477308 00:30:25.109 { 00:30:25.109 "results": [ 00:30:25.109 { 00:30:25.109 "job": "NVMe0n1", 00:30:25.109 "core_mask": "0x1", 00:30:25.109 "workload": "verify", 00:30:25.109 "status": "finished", 00:30:25.109 "verify_range": { 00:30:25.109 "start": 0, 00:30:25.109 "length": 16384 00:30:25.109 }, 00:30:25.109 
"queue_depth": 128, 00:30:25.109 "io_size": 4096, 00:30:25.109 "runtime": 15.007764, 00:30:25.109 "iops": 12386.122276443046, 00:30:25.109 "mibps": 48.38329014235565, 00:30:25.109 "io_failed": 9012, 00:30:25.109 "io_timeout": 0, 00:30:25.109 "avg_latency_us": 9834.766746160423, 00:30:25.109 "min_latency_us": 546.1333333333333, 00:30:25.109 "max_latency_us": 19770.02666666667 00:30:25.109 } 00:30:25.109 ], 00:30:25.109 "core_count": 1 00:30:25.109 } 00:30:25.109 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1477087 00:30:25.109 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1477087 ']' 00:30:25.109 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1477087 00:30:25.109 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:30:25.109 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:25.109 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1477087 00:30:25.109 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:25.109 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:25.109 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1477087' 00:30:25.109 killing process with pid 1477087 00:30:25.109 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1477087 00:30:25.109 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1477087 00:30:25.109 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:25.109 [2024-12-05 12:13:33.117824] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 
24.03.0 initialization... 00:30:25.109 [2024-12-05 12:13:33.117910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1477087 ] 00:30:25.109 [2024-12-05 12:13:33.211096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.109 [2024-12-05 12:13:33.264061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:25.109 Running I/O for 15 seconds... 00:30:25.109 11113.00 IOPS, 43.41 MiB/s [2024-12-05T11:13:50.158Z] [2024-12-05 12:13:35.691219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:95112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.109 [2024-12-05 12:13:35.691261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:95128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:95136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:95144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:95152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:95160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:95168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:95176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:95184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:95192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 
12:13:35.691418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:95200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:95208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:95216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:95224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:95232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:76 nsid:1 lba:95240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:95248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:95256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:95264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:95272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:95280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:30:25.109 [2024-12-05 12:13:35.691622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:95288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.109 [2024-12-05 12:13:35.691638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:95296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.109 [2024-12-05 12:13:35.691645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:95304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:95312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:95320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:95328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:95336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:95344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:95352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:95360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:95368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 
lba:95376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:95384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:95392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:95400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:95408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:95416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 
12:13:35.691913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:95424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:95440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:95448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:95456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.691990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.691999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:95464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692007] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:95472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:95480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:95488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:95120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.110 [2024-12-05 12:13:35.692075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:95496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:95504 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:95512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:95520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:95528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:95536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:95544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692202] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:95552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:95560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:95568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:95576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:95584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:95592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:95600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.110 [2024-12-05 12:13:35.692309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.110 [2024-12-05 12:13:35.692318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:95608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:95616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:95624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:95632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:95640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 
[2024-12-05 12:13:35.692392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:95648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:95656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:95664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:95672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:95680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692490] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:95688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:95696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:95704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:95712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:95720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:95728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:95736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:95744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.111 [2024-12-05 12:13:35.692615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.111 [2024-12-05 12:13:35.692644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95752 len:8 PRP1 0x0 PRP2 0x0 00:30:25.111 [2024-12-05 12:13:35.692652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.111 [2024-12-05 12:13:35.692668] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.111 [2024-12-05 12:13:35.692674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95760 len:8 PRP1 0x0 PRP2 0x0 00:30:25.111 [2024-12-05 12:13:35.692682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692689] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.111 [2024-12-05 12:13:35.692695] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command 
completed manually: 00:30:25.111 [2024-12-05 12:13:35.692701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95768 len:8 PRP1 0x0 PRP2 0x0 00:30:25.111 [2024-12-05 12:13:35.692708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692716] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.111 [2024-12-05 12:13:35.692722] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.111 [2024-12-05 12:13:35.692728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95776 len:8 PRP1 0x0 PRP2 0x0 00:30:25.111 [2024-12-05 12:13:35.692735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.111 [2024-12-05 12:13:35.692749] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.111 [2024-12-05 12:13:35.692756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95784 len:8 PRP1 0x0 PRP2 0x0 00:30:25.111 [2024-12-05 12:13:35.692763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692770] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.111 [2024-12-05 12:13:35.692775] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.111 [2024-12-05 12:13:35.692781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95792 len:8 PRP1 0x0 PRP2 0x0 00:30:25.111 [2024-12-05 12:13:35.692788] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692796] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.111 [2024-12-05 12:13:35.692802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.111 [2024-12-05 12:13:35.692808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95800 len:8 PRP1 0x0 PRP2 0x0 00:30:25.111 [2024-12-05 12:13:35.692815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692823] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.111 [2024-12-05 12:13:35.692828] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.111 [2024-12-05 12:13:35.692834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95808 len:8 PRP1 0x0 PRP2 0x0 00:30:25.111 [2024-12-05 12:13:35.692841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.111 [2024-12-05 12:13:35.692854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.111 [2024-12-05 12:13:35.692860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95816 len:8 PRP1 0x0 PRP2 0x0 00:30:25.111 [2024-12-05 12:13:35.692868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692876] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.111 
[2024-12-05 12:13:35.692881] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.111 [2024-12-05 12:13:35.692888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95824 len:8 PRP1 0x0 PRP2 0x0 00:30:25.111 [2024-12-05 12:13:35.692895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.111 [2024-12-05 12:13:35.692908] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.111 [2024-12-05 12:13:35.692915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95832 len:8 PRP1 0x0 PRP2 0x0 00:30:25.111 [2024-12-05 12:13:35.692922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692929] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.111 [2024-12-05 12:13:35.692935] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.111 [2024-12-05 12:13:35.692941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95840 len:8 PRP1 0x0 PRP2 0x0 00:30:25.111 [2024-12-05 12:13:35.692948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.111 [2024-12-05 12:13:35.692964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.111 [2024-12-05 12:13:35.692970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:95848 len:8 PRP1 0x0 PRP2 0x0 00:30:25.111 [2024-12-05 12:13:35.692977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.111 [2024-12-05 12:13:35.692984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.111 [2024-12-05 12:13:35.692990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.692996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95856 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693017] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95864 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95872 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693064] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693070] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95880 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693090] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693096] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95888 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693123] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95896 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693148] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 
12:13:35.693155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95904 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693177] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95912 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693198] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95920 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95928 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95936 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693277] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95944 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693309] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95952 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693330] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693336] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95960 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693357] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693363] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95968 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693386] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693391] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95976 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693412] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693418] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95984 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 
[2024-12-05 12:13:35.693431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693438] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693444] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95992 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693470] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693475] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96000 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96008 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693523] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693528] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96016 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693549] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693555] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96024 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693576] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693583] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96032 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.693596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.112 [2024-12-05 12:13:35.693604] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.112 [2024-12-05 12:13:35.693609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.112 [2024-12-05 12:13:35.693615] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96040 len:8 PRP1 0x0 PRP2 0x0 00:30:25.112 [2024-12-05 12:13:35.703985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704018] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.113 [2024-12-05 12:13:35.704026] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.113 [2024-12-05 12:13:35.704034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96048 len:8 PRP1 0x0 PRP2 0x0 00:30:25.113 [2024-12-05 12:13:35.704042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704050] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.113 [2024-12-05 12:13:35.704056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.113 [2024-12-05 12:13:35.704062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96056 len:8 PRP1 0x0 PRP2 0x0 00:30:25.113 [2024-12-05 12:13:35.704069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.113 [2024-12-05 12:13:35.704082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.113 [2024-12-05 12:13:35.704088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96064 len:8 PRP1 0x0 PRP2 0x0 00:30:25.113 [2024-12-05 12:13:35.704096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.113 [2024-12-05 12:13:35.704109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.113 [2024-12-05 12:13:35.704115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96072 len:8 PRP1 0x0 PRP2 0x0 00:30:25.113 [2024-12-05 12:13:35.704123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704131] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.113 [2024-12-05 12:13:35.704136] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.113 [2024-12-05 12:13:35.704143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96080 len:8 PRP1 0x0 PRP2 0x0 00:30:25.113 [2024-12-05 12:13:35.704150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.113 [2024-12-05 12:13:35.704163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.113 [2024-12-05 12:13:35.704169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96088 len:8 PRP1 0x0 PRP2 0x0 00:30:25.113 [2024-12-05 12:13:35.704176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704189] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.113 [2024-12-05 12:13:35.704194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:30:25.113 [2024-12-05 12:13:35.704201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96096 len:8 PRP1 0x0 PRP2 0x0 00:30:25.113 [2024-12-05 12:13:35.704208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704215] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.113 [2024-12-05 12:13:35.704221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.113 [2024-12-05 12:13:35.704227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96104 len:8 PRP1 0x0 PRP2 0x0 00:30:25.113 [2024-12-05 12:13:35.704235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.113 [2024-12-05 12:13:35.704248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.113 [2024-12-05 12:13:35.704255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96112 len:8 PRP1 0x0 PRP2 0x0 00:30:25.113 [2024-12-05 12:13:35.704262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.113 [2024-12-05 12:13:35.704275] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.113 [2024-12-05 12:13:35.704281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96120 len:8 PRP1 0x0 PRP2 0x0 00:30:25.113 [2024-12-05 12:13:35.704288] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704296] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.113 [2024-12-05 12:13:35.704301] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.113 [2024-12-05 12:13:35.704307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96128 len:8 PRP1 0x0 PRP2 0x0 00:30:25.113 [2024-12-05 12:13:35.704314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704356] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:25.113 [2024-12-05 12:13:35.704386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.113 [2024-12-05 12:13:35.704395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.113 [2024-12-05 12:13:35.704412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.113 [2024-12-05 12:13:35.704428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704436] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.113 [2024-12-05 12:13:35.704446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:35.704463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:30:25.113 [2024-12-05 12:13:35.704496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1969da0 (9): Bad file descriptor 00:30:25.113 [2024-12-05 12:13:35.708018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:30:25.113 [2024-12-05 12:13:35.738906] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:30:25.113 11005.00 IOPS, 42.99 MiB/s [2024-12-05T11:13:50.162Z] 11249.00 IOPS, 43.94 MiB/s [2024-12-05T11:13:50.162Z] 11469.50 IOPS, 44.80 MiB/s [2024-12-05T11:13:50.162Z] [2024-12-05 12:13:39.140449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:38352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.113 [2024-12-05 12:13:39.140482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:39.140494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.113 [2024-12-05 12:13:39.140500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:39.140507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:38368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.113 [2024-12-05 12:13:39.140513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:39.140519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:38376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.113 [2024-12-05 12:13:39.140525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:39.140531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:38384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.113 [2024-12-05 12:13:39.140536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:39.140543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:38392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.113 [2024-12-05 12:13:39.140548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:39.140554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.113 [2024-12-05 12:13:39.140560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:39.140567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:38408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.113 [2024-12-05 12:13:39.140572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.113 [2024-12-05 12:13:39.140579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:38416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:38432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:38448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:38456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 
[2024-12-05 12:13:39.140653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:38464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:38472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:38480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:38496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:38504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140717] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:38520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:38528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:38536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:38544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:38552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:38560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:38568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:38584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 
[2024-12-05 12:13:39.140850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:38624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:38640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:38656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:38664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:38672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:38680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 
lba:38688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.140991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:38696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.140996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.141002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:38704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.141007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.141014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:38712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.141019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.141026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.114 [2024-12-05 12:13:39.141030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 [2024-12-05 12:13:39.141037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:38744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.114 [2024-12-05 12:13:39.141042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.114 
[2024-12-05 12:13:39.141050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:38752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:38760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:38768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:38776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:38784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:38800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:38808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:38816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:38824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 
lba:38840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:38856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:38864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:38872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:38880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 
12:13:39.141246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:38888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:38896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:38904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:38912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:38920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:38928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141309] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:38936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:38944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:38952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:38960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:38976 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:38984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:38992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141442] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:39024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:39032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.115 [2024-12-05 12:13:39.141509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.115 [2024-12-05 12:13:39.141515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.116 [2024-12-05 12:13:39.141520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.116 [2024-12-05 12:13:39.141532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.116 [2024-12-05 12:13:39.141543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.116 [2024-12-05 12:13:39.141554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:39104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.116 [2024-12-05 12:13:39.141565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.116 
[2024-12-05 12:13:39.141576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.116 [2024-12-05 12:13:39.141588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141609] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39128 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141632] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39136 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141651] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39144 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 
12:13:39.141663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141668] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39152 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141686] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39160 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39168 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141723] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141727] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39176 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141741] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141745] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39184 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141763] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39192 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141785] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39200 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39208 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141818] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39216 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39224 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:30:25.116 [2024-12-05 12:13:39.141851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39232 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141872] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39240 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141888] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141892] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39248 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141912] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed 
manually: 00:30:25.116 [2024-12-05 12:13:39.141916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39256 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141926] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141930] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39264 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141947] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141951] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39272 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.116 [2024-12-05 12:13:39.141966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.116 [2024-12-05 12:13:39.141969] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.116 [2024-12-05 12:13:39.141973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39280 len:8 PRP1 0x0 PRP2 0x0 00:30:25.116 [2024-12-05 12:13:39.141978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.141984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.117 [2024-12-05 12:13:39.141987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.117 [2024-12-05 12:13:39.141991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39288 len:8 PRP1 0x0 PRP2 0x0 00:30:25.117 [2024-12-05 12:13:39.141996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.142002] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.117 [2024-12-05 12:13:39.142006] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.117 [2024-12-05 12:13:39.142010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39296 len:8 PRP1 0x0 PRP2 0x0 00:30:25.117 [2024-12-05 12:13:39.142015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.142020] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.117 [2024-12-05 12:13:39.142024] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.117 [2024-12-05 12:13:39.153068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39304 len:8 PRP1 0x0 PRP2 0x0 00:30:25.117 [2024-12-05 12:13:39.153091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.117 
[2024-12-05 12:13:39.153107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.117 [2024-12-05 12:13:39.153112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39312 len:8 PRP1 0x0 PRP2 0x0 00:30:25.117 [2024-12-05 12:13:39.153117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153123] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.117 [2024-12-05 12:13:39.153126] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.117 [2024-12-05 12:13:39.153131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39320 len:8 PRP1 0x0 PRP2 0x0 00:30:25.117 [2024-12-05 12:13:39.153136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.117 [2024-12-05 12:13:39.153145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.117 [2024-12-05 12:13:39.153149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39328 len:8 PRP1 0x0 PRP2 0x0 00:30:25.117 [2024-12-05 12:13:39.153157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.117 [2024-12-05 12:13:39.153166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.117 [2024-12-05 12:13:39.153171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:39336 len:8 PRP1 0x0 PRP2 0x0 00:30:25.117 [2024-12-05 12:13:39.153176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.117 [2024-12-05 12:13:39.153184] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.117 [2024-12-05 12:13:39.153189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39344 len:8 PRP1 0x0 PRP2 0x0 00:30:25.117 [2024-12-05 12:13:39.153193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153199] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.117 [2024-12-05 12:13:39.153203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.117 [2024-12-05 12:13:39.153207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39352 len:8 PRP1 0x0 PRP2 0x0 00:30:25.117 [2024-12-05 12:13:39.153212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.117 [2024-12-05 12:13:39.153221] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.117 [2024-12-05 12:13:39.153225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39360 len:8 PRP1 0x0 PRP2 0x0 00:30:25.117 [2024-12-05 12:13:39.153230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153235] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.117 [2024-12-05 12:13:39.153239] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.117 [2024-12-05 12:13:39.153243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:39368 len:8 PRP1 0x0 PRP2 0x0 00:30:25.117 [2024-12-05 12:13:39.153248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153254] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.117 [2024-12-05 12:13:39.153258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.117 [2024-12-05 12:13:39.153262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38728 len:8 PRP1 0x0 PRP2 0x0 00:30:25.117 [2024-12-05 12:13:39.153267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153273] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.117 [2024-12-05 12:13:39.153278] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.117 [2024-12-05 12:13:39.153282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:38736 len:8 PRP1 0x0 PRP2 0x0 00:30:25.117 [2024-12-05 12:13:39.153287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153323] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:30:25.117 [2024-12-05 12:13:39.153349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.117 [2024-12-05 12:13:39.153355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153362] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.117 [2024-12-05 12:13:39.153368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153373] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.117 [2024-12-05 12:13:39.153378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153384] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.117 [2024-12-05 12:13:39.153389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:39.153394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:30:25.117 [2024-12-05 12:13:39.153427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1969da0 (9): Bad file descriptor 00:30:25.117 [2024-12-05 12:13:39.155860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:30:25.117 [2024-12-05 12:13:39.183879] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:30:25.117 11660.80 IOPS, 45.55 MiB/s [2024-12-05T11:13:50.166Z] 11859.17 IOPS, 46.32 MiB/s [2024-12-05T11:13:50.166Z] 11991.14 IOPS, 46.84 MiB/s [2024-12-05T11:13:50.166Z] 12097.50 IOPS, 47.26 MiB/s [2024-12-05T11:13:50.166Z] [2024-12-05 12:13:43.527159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:25.117 [2024-12-05 12:13:43.527187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:43.527200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:102776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.117 [2024-12-05 12:13:43.527206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:43.527213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:102784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.117 [2024-12-05 12:13:43.527219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:43.527225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:102792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.117 [2024-12-05 12:13:43.527230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:43.527237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.117 [2024-12-05 12:13:43.527242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 
12:13:43.527248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:102808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.117 [2024-12-05 12:13:43.527253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:43.527260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:102816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.117 [2024-12-05 12:13:43.527269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:43.527276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:102824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.117 [2024-12-05 12:13:43.527281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:43.527287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:102832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.117 [2024-12-05 12:13:43.527292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:43.527298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.117 [2024-12-05 12:13:43.527304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.117 [2024-12-05 12:13:43.527310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:102848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.117 [2024-12-05 12:13:43.527315] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:102856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:102872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:102880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:102896 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527448] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:102976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.118 [2024-12-05 12:13:43.527509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:25.118 [2024-12-05 12:13:43.527514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:25.118 [2024-12-05 12:13:43.527521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:102992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:25.118 [2024-12-05 12:13:43.527526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[identical WRITE / ABORTED - SQ DELETION pairs repeat for lba:103000 through lba:103472]
00:30:25.120 [2024-12-05 12:13:43.528234] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:25.120 [2024-12-05 12:13:43.528240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103480 len:8 PRP1 0x0 PRP2 0x0
00:30:25.120 [2024-12-05 12:13:43.528245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:25.120 [2024-12-05 12:13:43.528252] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[identical manual-completion / ABORTED - SQ DELETION / aborting-queued-i/o sequences repeat for lba:103488 through lba:103752]
00:30:25.121 [2024-12-05 12:13:43.539273] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.121 [2024-12-05 12:13:43.539277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.121 [2024-12-05 12:13:43.539281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103760 len:8 PRP1 0x0 PRP2 0x0 00:30:25.121 [2024-12-05 12:13:43.539287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.121 [2024-12-05 12:13:43.539292] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.121 [2024-12-05 12:13:43.539296] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.121 [2024-12-05 12:13:43.539300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103768 len:8 PRP1 0x0 PRP2 0x0 00:30:25.121 [2024-12-05 12:13:43.539306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.121 [2024-12-05 12:13:43.539311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.121 [2024-12-05 12:13:43.539314] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.121 [2024-12-05 12:13:43.539319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103776 len:8 PRP1 0x0 PRP2 0x0 00:30:25.121 [2024-12-05 12:13:43.539324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.121 [2024-12-05 12:13:43.539329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:25.121 [2024-12-05 12:13:43.539333] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:25.121 [2024-12-05 12:13:43.539337] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:103784 len:8 PRP1 0x0 PRP2 0x0 00:30:25.121 [2024-12-05 12:13:43.539342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.121 [2024-12-05 12:13:43.539378] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:30:25.121 [2024-12-05 12:13:43.539402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.121 [2024-12-05 12:13:43.539408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.121 [2024-12-05 12:13:43.539415] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.121 [2024-12-05 12:13:43.539420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.121 [2024-12-05 12:13:43.539426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.121 [2024-12-05 12:13:43.539431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.121 [2024-12-05 12:13:43.539437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:25.121 [2024-12-05 12:13:43.539442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:25.121 [2024-12-05 12:13:43.539447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:30:25.121 [2024-12-05 12:13:43.539474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1969da0 (9): Bad file descriptor 00:30:25.121 [2024-12-05 12:13:43.541903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:30:25.121 12031.11 IOPS, 47.00 MiB/s [2024-12-05T11:13:50.170Z] [2024-12-05 12:13:43.656207] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:30:25.121 12114.00 IOPS, 47.32 MiB/s [2024-12-05T11:13:50.170Z] 12188.36 IOPS, 47.61 MiB/s [2024-12-05T11:13:50.170Z] 12260.33 IOPS, 47.89 MiB/s [2024-12-05T11:13:50.170Z] 12307.38 IOPS, 48.08 MiB/s [2024-12-05T11:13:50.170Z] 12356.86 IOPS, 48.27 MiB/s 00:30:25.121 Latency(us) 00:30:25.121 [2024-12-05T11:13:50.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.121 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:25.121 Verification LBA range: start 0x0 length 0x4000 00:30:25.121 NVMe0n1 : 15.01 12386.12 48.38 600.49 0.00 9834.77 546.13 19770.03 00:30:25.121 [2024-12-05T11:13:50.170Z] =================================================================================================================== 00:30:25.121 [2024-12-05T11:13:50.170Z] Total : 12386.12 48.38 600.49 0.00 9834.77 546.13 19770.03 00:30:25.121 Received shutdown signal, test time was about 15.000000 seconds 00:30:25.121 00:30:25.121 Latency(us) 00:30:25.121 [2024-12-05T11:13:50.170Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:25.121 [2024-12-05T11:13:50.170Z] =================================================================================================================== 00:30:25.121 [2024-12-05T11:13:50.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:25.122 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 
00:30:25.122 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:30:25.122 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:30:25.122 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1480213 00:30:25.122 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1480213 /var/tmp/bdevperf.sock 00:30:25.122 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:30:25.122 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1480213 ']' 00:30:25.122 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:25.122 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:25.122 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:25.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:25.122 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:25.122 12:13:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:25.693 12:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:25.693 12:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:30:25.693 12:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:25.954 [2024-12-05 12:13:50.875751] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:25.954 12:13:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:26.213 [2024-12-05 12:13:51.060198] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:26.213 12:13:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:26.472 NVMe0n1 00:30:26.472 12:13:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:26.731 00:30:26.731 12:13:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:30:26.991 00:30:26.991 12:13:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:26.991 12:13:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:27.251 12:13:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:27.251 12:13:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:30.552 12:13:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:30.552 12:13:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:30.552 12:13:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1481381 00:30:30.552 12:13:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:30.552 12:13:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1481381 00:30:31.496 { 00:30:31.496 "results": [ 00:30:31.496 { 00:30:31.496 "job": "NVMe0n1", 00:30:31.496 "core_mask": "0x1", 00:30:31.496 "workload": "verify", 00:30:31.496 "status": "finished", 00:30:31.496 "verify_range": { 00:30:31.496 "start": 0, 00:30:31.496 "length": 16384 00:30:31.496 }, 00:30:31.496 "queue_depth": 128, 00:30:31.496 "io_size": 4096, 00:30:31.496 "runtime": 1.008714, 00:30:31.496 "iops": 12808.387709499422, 00:30:31.496 "mibps": 50.03276449023212, 00:30:31.496 "io_failed": 0, 00:30:31.496 "io_timeout": 0, 00:30:31.496 "avg_latency_us": 
9953.80626625387, 00:30:31.496 "min_latency_us": 1843.2, 00:30:31.496 "max_latency_us": 8628.906666666666 00:30:31.496 } 00:30:31.496 ], 00:30:31.496 "core_count": 1 00:30:31.496 } 00:30:31.496 12:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:31.496 [2024-12-05 12:13:49.918648] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:30:31.496 [2024-12-05 12:13:49.918708] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480213 ] 00:30:31.496 [2024-12-05 12:13:50.004628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:31.496 [2024-12-05 12:13:50.036096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.496 [2024-12-05 12:13:52.202003] bdev_nvme.c:2056:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:31.496 [2024-12-05 12:13:52.202041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.496 [2024-12-05 12:13:52.202049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.496 [2024-12-05 12:13:52.202056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.496 [2024-12-05 12:13:52.202061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.496 [2024-12-05 12:13:52.202067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:30:31.496 [2024-12-05 12:13:52.202072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.496 [2024-12-05 12:13:52.202078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:31.496 [2024-12-05 12:13:52.202083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:31.496 [2024-12-05 12:13:52.202088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:30:31.496 [2024-12-05 12:13:52.202109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:30:31.496 [2024-12-05 12:13:52.202120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1280da0 (9): Bad file descriptor 00:30:31.496 [2024-12-05 12:13:52.213329] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:30:31.496 Running I/O for 1 seconds... 
00:30:31.496 12776.00 IOPS, 49.91 MiB/s 00:30:31.496 Latency(us) 00:30:31.496 [2024-12-05T11:13:56.545Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:31.496 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:31.496 Verification LBA range: start 0x0 length 0x4000 00:30:31.496 NVMe0n1 : 1.01 12808.39 50.03 0.00 0.00 9953.81 1843.20 8628.91 00:30:31.496 [2024-12-05T11:13:56.545Z] =================================================================================================================== 00:30:31.496 [2024-12-05T11:13:56.545Z] Total : 12808.39 50.03 0.00 0.00 9953.81 1843.20 8628.91 00:30:31.496 12:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:31.496 12:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:31.757 12:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:32.018 12:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:32.018 12:13:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:32.279 12:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:32.279 12:13:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:35.579 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:35.579 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:35.579 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1480213 00:30:35.579 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1480213 ']' 00:30:35.579 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1480213 00:30:35.579 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:30:35.579 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:35.579 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1480213 00:30:35.579 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:35.579 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:35.579 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1480213' 00:30:35.579 killing process with pid 1480213 00:30:35.579 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1480213 00:30:35.579 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1480213 00:30:35.840 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:35.840 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:35.840 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:35.840 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:35.840 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:35.840 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:35.840 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@99 -- # sync 00:30:35.840 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:35.840 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # set +e 00:30:35.840 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:35.840 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:35.840 rmmod nvme_tcp 00:30:35.840 rmmod nvme_fabrics 00:30:35.840 rmmod nvme_keyring 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # set -e 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # return 0 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # '[' -n 1476491 ']' 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@337 -- # killprocess 1476491 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1476491 ']' 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1476491 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1476491 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1476491' 00:30:36.101 killing process with pid 1476491 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1476491 00:30:36.101 12:14:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1476491 00:30:36.101 12:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:36.101 12:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # nvmf_fini 00:30:36.101 12:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@254 -- # local dev 00:30:36.101 12:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@257 -- # remove_target_ns 00:30:36.101 12:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:36.101 12:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:36.101 12:14:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@258 -- # delete_main_bridge 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # return 0 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@269 -- # flush_ip 
cvl_0_0 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # _dev=0 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # dev_map=() 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@274 -- # iptr 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-save 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-restore 00:30:38.649 00:30:38.649 real 0m40.385s 00:30:38.649 user 
2m3.386s 00:30:38.649 sys 0m9.004s 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:38.649 ************************************ 00:30:38.649 END TEST nvmf_failover 00:30:38.649 ************************************ 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:38.649 ************************************ 00:30:38.649 START TEST nvmf_host_discovery 00:30:38.649 ************************************ 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:38.649 * Looking for test storage... 
00:30:38.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:30:38.649 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:38.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.650 --rc genhtml_branch_coverage=1 00:30:38.650 --rc genhtml_function_coverage=1 00:30:38.650 --rc 
genhtml_legend=1 00:30:38.650 --rc geninfo_all_blocks=1 00:30:38.650 --rc geninfo_unexecuted_blocks=1 00:30:38.650 00:30:38.650 ' 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:38.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.650 --rc genhtml_branch_coverage=1 00:30:38.650 --rc genhtml_function_coverage=1 00:30:38.650 --rc genhtml_legend=1 00:30:38.650 --rc geninfo_all_blocks=1 00:30:38.650 --rc geninfo_unexecuted_blocks=1 00:30:38.650 00:30:38.650 ' 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:38.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.650 --rc genhtml_branch_coverage=1 00:30:38.650 --rc genhtml_function_coverage=1 00:30:38.650 --rc genhtml_legend=1 00:30:38.650 --rc geninfo_all_blocks=1 00:30:38.650 --rc geninfo_unexecuted_blocks=1 00:30:38.650 00:30:38.650 ' 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:38.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:38.650 --rc genhtml_branch_coverage=1 00:30:38.650 --rc genhtml_function_coverage=1 00:30:38.650 --rc genhtml_legend=1 00:30:38.650 --rc geninfo_all_blocks=1 00:30:38.650 --rc geninfo_unexecuted_blocks=1 00:30:38.650 00:30:38.650 ' 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:38.650 12:14:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@50 -- # : 0 00:30:38.650 
12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:30:38.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- 
# '[' -z tcp ']' 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:30:38.650 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:38.651 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:38.651 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:38.651 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:38.651 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:38.651 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:30:38.651 12:14:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@133 -- # local -A pci_drivers 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@135 -- # net_devs=() 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # e810=() 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # x722=() 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # mlx=() 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:46.790 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:46.790 12:14:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:46.790 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.790 12:14:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:46.790 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:46.790 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:46.790 12:14:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@247 -- # create_target_ns 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@28 -- # local -g 
_dev 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:46.790 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@143 -- # local 
dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:46.791 10.0.0.1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:46.791 12:14:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:46.791 10.0.0.2 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@89 -- # (( pair = 0 )) 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local 
ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:46.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:46.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.687 ms 00:30:46.791 00:30:46.791 --- 10.0.0.1 ping statistics --- 00:30:46.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.791 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:30:46.791 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:30:46.792 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:30:46.792 12:14:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:30:46.792 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:46.792 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:30:46.792 00:30:46.792 --- 10.0.0.2 ping statistics --- 00:30:46.792 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:46.792 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # return 0 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # 
[[ -n cvl_0_0 ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # return 1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev= 00:30:46.792 12:14:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@160 -- # return 0 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:46.792 12:14:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # return 1 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev= 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@160 -- # return 0 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:30:46.792 12:14:11 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:30:46.792 ' 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # nvmfpid=1486604 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # waitforlisten 1486604 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1486604 ']' 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:46.792 12:14:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:46.792 [2024-12-05 12:14:11.175931] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:30:46.792 [2024-12-05 12:14:11.175994] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:46.792 [2024-12-05 12:14:11.273731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:46.792 [2024-12-05 12:14:11.324474] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:46.792 [2024-12-05 12:14:11.324522] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:46.792 [2024-12-05 12:14:11.324531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:46.792 [2024-12-05 12:14:11.324538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:46.792 [2024-12-05 12:14:11.324544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:46.792 [2024-12-05 12:14:11.325284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.053 [2024-12-05 12:14:12.052677] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.053 [2024-12-05 12:14:12.064922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:47.053 12:14:12 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.053 null0 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.053 null1 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:47.053 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.314 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:47.314 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1486916 00:30:47.314 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1486916 /tmp/host.sock 00:30:47.314 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:47.314 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1486916 ']' 00:30:47.314 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:30:47.314 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:47.314 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:47.314 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:47.314 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:47.314 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:47.314 [2024-12-05 12:14:12.161505] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:30:47.314 [2024-12-05 12:14:12.161565] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486916 ] 00:30:47.314 [2024-12-05 12:14:12.254316] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:47.314 [2024-12-05 12:14:12.308261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:48.258 
12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:48.258 12:14:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:48.258 12:14:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.258 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:48.259 
12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.259 [2024-12-05 12:14:13.292104] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.259 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:48.519 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:30:48.520 12:14:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:30:49.091 [2024-12-05 12:14:14.041360] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:49.091 [2024-12-05 12:14:14.041380] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:49.091 [2024-12-05 12:14:14.041394] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:49.351 [2024-12-05 12:14:14.168806] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:49.351 [2024-12-05 12:14:14.269662] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:30:49.351 [2024-12-05 12:14:14.270617] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x14c5670:1 started. 00:30:49.351 [2024-12-05 12:14:14.272248] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:49.351 [2024-12-05 12:14:14.272265] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:49.351 [2024-12-05 12:14:14.280381] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14c5670 was disconnected and freed. delete nvme_qpair. 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:49.611 12:14:14 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:49.611 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:49.872 
12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:49.872 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:50.133 [2024-12-05 12:14:14.949404] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x14d2820:1 started. 00:30:50.133 [2024-12-05 12:14:14.952986] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14d2820 was disconnected and freed. delete nvme_qpair. 
00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.133 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.134 12:14:14 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.134 [2024-12-05 12:14:15.040788] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:50.134 [2024-12-05 12:14:15.040984] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:50.134 [2024-12-05 12:14:15.041005] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:50.134 12:14:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:50.134 12:14:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:50.134 [2024-12-05 12:14:15.167841] bdev_nvme.c:7435:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:50.134 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.394 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:50.394 12:14:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:30:50.394 [2024-12-05 12:14:15.226583] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:30:50.394 [2024-12-05 12:14:15.226621] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:50.394 [2024-12-05 12:14:15.226629] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 
00:30:50.394 [2024-12-05 12:14:15.226634] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:51.333 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:51.333 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:51.333 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:30:51.333 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:51.333 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:51.333 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.333 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:51.333 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.333 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:51.333 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.333 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:51.333 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:51.333 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.334 [2024-12-05 12:14:16.316724] bdev_nvme.c:7493:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:51.334 [2024-12-05 12:14:16.316746] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:30:51.334 [2024-12-05 12:14:16.324229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.334 [2024-12-05 12:14:16.324249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.334 [2024-12-05 12:14:16.324259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.334 [2024-12-05 12:14:16.324267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.334 [2024-12-05 12:14:16.324275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.334 [2024-12-05 12:14:16.324282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.334 [2024-12-05 12:14:16.324290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.334 [2024-12-05 12:14:16.324297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.334 [2024-12-05 12:14:16.324305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1495c50 is same with the state(6) to be set 00:30:51.334 12:14:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.334 [2024-12-05 12:14:16.334247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1495c50 (9): Bad file descriptor 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.334 [2024-12-05 12:14:16.344282] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:51.334 [2024-12-05 12:14:16.344294] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:51.334 [2024-12-05 12:14:16.344301] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:51.334 [2024-12-05 12:14:16.344310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:51.334 [2024-12-05 12:14:16.344328] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:30:51.334 [2024-12-05 12:14:16.344781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.334 [2024-12-05 12:14:16.344819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1495c50 with addr=10.0.0.2, port=4420 00:30:51.334 [2024-12-05 12:14:16.344830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1495c50 is same with the state(6) to be set 00:30:51.334 [2024-12-05 12:14:16.344849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1495c50 (9): Bad file descriptor 00:30:51.334 [2024-12-05 12:14:16.344875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:51.334 [2024-12-05 12:14:16.344883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:51.334 [2024-12-05 12:14:16.344892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:51.334 [2024-12-05 12:14:16.344904] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:51.334 [2024-12-05 12:14:16.344910] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:51.334 [2024-12-05 12:14:16.344915] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:51.334 [2024-12-05 12:14:16.354361] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:51.334 [2024-12-05 12:14:16.354375] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:30:51.334 [2024-12-05 12:14:16.354380] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:51.334 [2024-12-05 12:14:16.354385] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:51.334 [2024-12-05 12:14:16.354401] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:51.334 [2024-12-05 12:14:16.354839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.334 [2024-12-05 12:14:16.354877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1495c50 with addr=10.0.0.2, port=4420 00:30:51.334 [2024-12-05 12:14:16.354888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1495c50 is same with the state(6) to be set 00:30:51.334 [2024-12-05 12:14:16.354906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1495c50 (9): Bad file descriptor 00:30:51.334 [2024-12-05 12:14:16.354931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:51.334 [2024-12-05 12:14:16.354940] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:51.334 [2024-12-05 12:14:16.354948] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:51.334 [2024-12-05 12:14:16.354955] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:51.334 [2024-12-05 12:14:16.354960] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:51.334 [2024-12-05 12:14:16.354965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:30:51.334 [2024-12-05 12:14:16.364434] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:51.334 [2024-12-05 12:14:16.364450] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:51.334 [2024-12-05 12:14:16.364459] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:51.334 [2024-12-05 12:14:16.364464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:51.334 [2024-12-05 12:14:16.364481] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:51.334 [2024-12-05 12:14:16.364790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.334 [2024-12-05 12:14:16.364803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1495c50 with addr=10.0.0.2, port=4420 00:30:51.334 [2024-12-05 12:14:16.364811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1495c50 is same with the state(6) to be set 00:30:51.334 [2024-12-05 12:14:16.364823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1495c50 (9): Bad file descriptor 00:30:51.334 [2024-12-05 12:14:16.364833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:51.334 [2024-12-05 12:14:16.364840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:51.334 [2024-12-05 12:14:16.364852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:51.334 [2024-12-05 12:14:16.364859] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:30:51.334 [2024-12-05 12:14:16.364864] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:51.334 [2024-12-05 12:14:16.364869] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:51.334 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:51.335 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:51.335 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:51.335 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:51.335 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:51.335 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:51.335 [2024-12-05 12:14:16.374513] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:51.335 [2024-12-05 12:14:16.374526] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:51.335 [2024-12-05 12:14:16.374531] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:30:51.335 [2024-12-05 12:14:16.374535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:51.335 [2024-12-05 12:14:16.374550] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:51.335 [2024-12-05 12:14:16.374839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.335 [2024-12-05 12:14:16.374851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1495c50 with addr=10.0.0.2, port=4420 00:30:51.335 [2024-12-05 12:14:16.374859] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1495c50 is same with the state(6) to be set 00:30:51.335 [2024-12-05 12:14:16.374870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1495c50 (9): Bad file descriptor 00:30:51.335 [2024-12-05 12:14:16.374880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:51.335 [2024-12-05 12:14:16.374886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:51.335 [2024-12-05 12:14:16.374893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:51.335 [2024-12-05 12:14:16.374899] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:51.335 [2024-12-05 12:14:16.374904] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:51.335 [2024-12-05 12:14:16.374909] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:30:51.335 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:30:51.335 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.335 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:51.335 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.335 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:51.335 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.335 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:51.596 [2024-12-05 12:14:16.384578] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:51.596 [2024-12-05 12:14:16.384588] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:30:51.596 [2024-12-05 12:14:16.384592] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:51.596 [2024-12-05 12:14:16.384595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:51.596 [2024-12-05 12:14:16.384606] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:30:51.596 [2024-12-05 12:14:16.384888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.596 [2024-12-05 12:14:16.384898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1495c50 with addr=10.0.0.2, port=4420 00:30:51.597 [2024-12-05 12:14:16.384903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1495c50 is same with the state(6) to be set 00:30:51.597 [2024-12-05 12:14:16.384910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1495c50 (9): Bad file descriptor 00:30:51.597 [2024-12-05 12:14:16.384918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:51.597 [2024-12-05 12:14:16.384922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:51.597 [2024-12-05 12:14:16.384928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:51.597 [2024-12-05 12:14:16.384933] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:51.597 [2024-12-05 12:14:16.384936] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:51.597 [2024-12-05 12:14:16.384939] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:30:51.597 [2024-12-05 12:14:16.394634] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:30:51.597 [2024-12-05 12:14:16.394643] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:30:51.597 [2024-12-05 12:14:16.394646] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:30:51.597 [2024-12-05 12:14:16.394649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:51.597 [2024-12-05 12:14:16.394659] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:30:51.597 [2024-12-05 12:14:16.394942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.597 [2024-12-05 12:14:16.394950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1495c50 with addr=10.0.0.2, port=4420 00:30:51.597 [2024-12-05 12:14:16.394955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1495c50 is same with the state(6) to be set 00:30:51.597 [2024-12-05 12:14:16.394962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1495c50 (9): Bad file descriptor 00:30:51.597 [2024-12-05 12:14:16.394969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:30:51.597 [2024-12-05 12:14:16.394974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:30:51.597 [2024-12-05 12:14:16.394979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:30:51.597 [2024-12-05 12:14:16.394983] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:30:51.597 [2024-12-05 12:14:16.394989] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:30:51.597 [2024-12-05 12:14:16.394992] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:30:51.597 [2024-12-05 12:14:16.403264] bdev_nvme.c:7298:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:51.597 [2024-12-05 12:14:16.403278] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:51.597 12:14:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:51.597 12:14:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:51.597 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:51.598 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:30:51.598 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:51.598 
12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:51.598 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.598 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:51.598 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.598 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:51.598 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:30:51.858 12:14:16 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.858 12:14:16 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:52.801 [2024-12-05 12:14:17.752350] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:52.801 [2024-12-05 12:14:17.752363] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:52.801 [2024-12-05 12:14:17.752372] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:52.801 [2024-12-05 12:14:17.841628] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:53.063 [2024-12-05 12:14:17.903192] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:30:53.063 [2024-12-05 12:14:17.903849] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x14934a0:1 started. 00:30:53.063 [2024-12-05 12:14:17.905199] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:53.063 [2024-12-05 12:14:17.905219] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:53.063 [2024-12-05 12:14:17.909222] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x14934a0 was disconnected and freed. delete nvme_qpair. 
00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.063 request: 00:30:53.063 { 00:30:53.063 "name": "nvme", 00:30:53.063 "trtype": "tcp", 00:30:53.063 "traddr": "10.0.0.2", 00:30:53.063 "adrfam": "ipv4", 00:30:53.063 "trsvcid": "8009", 00:30:53.063 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:53.063 "wait_for_attach": true, 00:30:53.063 "method": "bdev_nvme_start_discovery", 00:30:53.063 "req_id": 1 00:30:53.063 } 00:30:53.063 Got JSON-RPC error response 00:30:53.063 response: 00:30:53.063 { 00:30:53.063 "code": -17, 00:30:53.063 "message": "File exists" 00:30:53.063 } 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # 
get_discovery_ctrlrs 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.063 12:14:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:53.063 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.063 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:53.063 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:53.063 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:30:53.063 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:53.063 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:53.063 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.063 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:53.063 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.063 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:53.063 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.063 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.063 request: 00:30:53.063 { 00:30:53.063 "name": "nvme_second", 00:30:53.063 "trtype": "tcp", 00:30:53.063 "traddr": "10.0.0.2", 00:30:53.063 "adrfam": "ipv4", 00:30:53.063 "trsvcid": "8009", 00:30:53.063 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:53.063 "wait_for_attach": true, 00:30:53.089 "method": "bdev_nvme_start_discovery", 00:30:53.089 "req_id": 1 00:30:53.089 } 00:30:53.089 Got JSON-RPC error response 00:30:53.089 response: 00:30:53.089 { 00:30:53.089 "code": -17, 00:30:53.089 "message": "File exists" 00:30:53.089 } 
00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.089 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:53.351 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:53.351 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:53.351 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:53.351 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:30:53.351 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:53.351 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:53.351 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.351 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:53.351 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:53.351 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:53.351 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:30:53.351 12:14:18 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.294 [2024-12-05 12:14:19.164500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:54.294 [2024-12-05 12:14:19.164523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d4650 with addr=10.0.0.2, port=8010 00:30:54.294 [2024-12-05 12:14:19.164533] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:54.294 [2024-12-05 12:14:19.164539] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:54.294 [2024-12-05 12:14:19.164544] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:55.236 [2024-12-05 12:14:20.167003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:55.236 [2024-12-05 12:14:20.167029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14d4650 with addr=10.0.0.2, port=8010 00:30:55.236 [2024-12-05 12:14:20.167039] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:55.236 [2024-12-05 12:14:20.167044] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:55.236 [2024-12-05 12:14:20.167050] bdev_nvme.c:7579:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:30:56.177 [2024-12-05 12:14:21.168978] bdev_nvme.c:7554:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:30:56.177 request: 00:30:56.177 { 00:30:56.177 "name": "nvme_second", 00:30:56.177 "trtype": "tcp", 00:30:56.177 "traddr": "10.0.0.2", 00:30:56.177 "adrfam": "ipv4", 00:30:56.177 "trsvcid": "8010", 00:30:56.177 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:56.177 "wait_for_attach": false, 00:30:56.177 "attach_timeout_ms": 3000, 00:30:56.177 "method": "bdev_nvme_start_discovery", 00:30:56.177 "req_id": 1 
00:30:56.177 } 00:30:56.177 Got JSON-RPC error response 00:30:56.177 response: 00:30:56.177 { 00:30:56.177 "code": -110, 00:30:56.177 "message": "Connection timed out" 00:30:56.177 } 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:30:56.177 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:30:56.438 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1486916 00:30:56.439 12:14:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@99 -- # sync 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@102 -- # set +e 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:56.439 rmmod nvme_tcp 00:30:56.439 rmmod nvme_fabrics 00:30:56.439 rmmod nvme_keyring 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@106 -- # set -e 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@107 -- # return 0 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # '[' -n 1486604 ']' 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@337 -- # killprocess 1486604 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1486604 ']' 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1486604 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1486604 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1486604' 00:30:56.439 killing process with pid 1486604 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1486604 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1486604 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@254 -- # local dev 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:56.439 12:14:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # return 0 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # 
(( 4 == 3 )) 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@274 -- # iptr 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-save 00:30:58.986 12:14:23 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:30:58.986 00:30:58.986 real 0m20.297s 00:30:58.986 user 0m23.407s 00:30:58.986 sys 0m7.236s 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.986 ************************************ 00:30:58.986 END TEST nvmf_host_discovery 00:30:58.986 ************************************ 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:58.986 ************************************ 00:30:58.986 START TEST nvmf_host_multipath_status 00:30:58.986 ************************************ 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:30:58.986 * Looking for test storage... 
00:30:58.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:30:58.986 12:14:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:30:58.986 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:58.987 12:14:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:58.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.987 --rc genhtml_branch_coverage=1 00:30:58.987 --rc genhtml_function_coverage=1 00:30:58.987 --rc genhtml_legend=1 00:30:58.987 --rc geninfo_all_blocks=1 00:30:58.987 --rc geninfo_unexecuted_blocks=1 00:30:58.987 00:30:58.987 ' 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:58.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.987 --rc genhtml_branch_coverage=1 00:30:58.987 --rc genhtml_function_coverage=1 00:30:58.987 --rc genhtml_legend=1 00:30:58.987 --rc geninfo_all_blocks=1 00:30:58.987 --rc geninfo_unexecuted_blocks=1 00:30:58.987 00:30:58.987 ' 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:58.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.987 --rc genhtml_branch_coverage=1 00:30:58.987 --rc genhtml_function_coverage=1 00:30:58.987 --rc genhtml_legend=1 00:30:58.987 --rc geninfo_all_blocks=1 00:30:58.987 --rc geninfo_unexecuted_blocks=1 00:30:58.987 00:30:58.987 ' 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:58.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:58.987 --rc genhtml_branch_coverage=1 00:30:58.987 --rc genhtml_function_coverage=1 00:30:58.987 --rc genhtml_legend=1 00:30:58.987 --rc geninfo_all_blocks=1 00:30:58.987 --rc geninfo_unexecuted_blocks=1 00:30:58.987 00:30:58.987 ' 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:30:58.987 
12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@15 -- # shopt -s extglob 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:58.987 12:14:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@50 -- # : 0 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:30:58.987 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:58.987 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # remove_target_ns 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # xtrace_disable 00:30:58.988 12:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # pci_devs=() 00:31:07.132 12:14:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # net_devs=() 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # e810=() 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # local -ga e810 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # x722=() 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # local -ga x722 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # mlx=() 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # local -ga mlx 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:07.132 12:14:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:07.132 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:07.133 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:07.133 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:07.133 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
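The per-device loop traced above (common.sh@227 and @243) globs the PCI device's `net/` subdirectory in sysfs and then strips each path down to the bare interface name with a `##*/` parameter expansion. A minimal standalone sketch of that stripping step follows; the sysfs path is a synthetic stand-in for the live glob, not read from real hardware:

```shell
#!/usr/bin/env bash
# Synthetic stand-in for the glob "/sys/bus/pci/devices/$pci/net/"* that
# common.sh@227 performs on a live host; there, the shell glob itself
# populates this array with one entry per kernel net device.
pci_net_devs=("/sys/bus/pci/devices/0000:4b:00.0/net/cvl_0_0")

# common.sh@243: drop everything up to the last '/', keeping only the
# basename of each sysfs entry, i.e. the kernel interface name.
pci_net_devs=("${pci_net_devs[@]##*/}")

echo "Found net devices under 0000:4b:00.0: ${pci_net_devs[0]}"
```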
00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:07.133 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # is_hw=yes 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@247 -- # create_target_ns 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=lo 
in_ns=NVMF_TARGET_NS_CMD 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@28 -- # local -g _dev 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # ips=() 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:07.133 12:14:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772161 00:31:07.133 12:14:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:07.133 10.0.0.1 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:31:07.133 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772162 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval 'ip netns exec 
nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:07.134 10.0.0.2 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 
-- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:07.134 12:14:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 
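The `val_to_ip` calls traced above turn the pool value 167772161 into `10.0.0.1` via `printf '%u.%u.%u.%u\n'`. The helper's exact body is not visible in the log, but its effect can be sketched as a plain bit-shift split of the 32-bit value (a hypothetical re-implementation, not the harness's verbatim source):

```shell
#!/usr/bin/env bash
# Hypothetical re-implementation of the harness's val_to_ip helper
# (nvmf/setup.sh@11-13): split a 32-bit integer into four octets and
# print them dotted-quad, most significant byte first.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1 (initiator side)
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2 (target side)
```

This also explains the `ip_pool += 2` stride in the setup loop: each initiator/target pair consumes two consecutive addresses from the pool.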
00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:07.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:07.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.630 ms 00:31:07.134 00:31:07.134 --- 10.0.0.1 ping statistics --- 00:31:07.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.134 rtt min/avg/max/mdev = 0.630/0.630/0.630/0.000 ms 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:07.134 12:14:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:31:07.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:07.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:31:07.134 00:31:07.134 --- 10.0.0.2 ping statistics --- 00:31:07.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:07.134 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair++ )) 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # return 0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ 
-n initiator0 ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:31:07.134 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator1 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n '' ]] 
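The `get_net_dev` lookups traced here resolve logical endpoint names (`initiator0`, `target0`) to kernel interfaces through the `dev_map` associative array that `setup_interfaces` populated earlier (setup.sh@76), and return 1 when a name was never mapped. A minimal sketch of that lookup, with `dev_map` seeded by hand rather than by the setup loop:

```shell
#!/usr/bin/env bash
# dev_map is normally filled in by setup_interfaces (setup.sh@76); it is
# seeded by hand here so the sketch is self-contained.
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

# Hypothetical condensation of get_net_dev (setup.sh@98-101): resolve a
# logical endpoint name to its kernel interface, or fail if unmapped.
get_net_dev() {
    local dev=$1
    [[ -n ${dev_map["$dev"]} ]] || return 1
    echo "${dev_map["$dev"]}"
}

get_net_dev initiator0                                   # prints: cvl_0_0
get_net_dev initiator1 || echo 'no second pair mapped'   # lookup fails
```

The failed `initiator1` lookup is why `NVMF_SECOND_INITIATOR_IP` is left empty further down: only one interface pair was configured for this run.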
00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # return 1 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev= 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@160 -- # return 0 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target1 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target1 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # return 1 00:31:07.135 12:14:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev= 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@160 -- # return 0 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:31:07.135 ' 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # nvmfpid=1493113 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # waitforlisten 1493113 00:31:07.135 12:14:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1493113 ']' 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:07.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:07.135 12:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:07.135 [2024-12-05 12:14:31.604264] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:31:07.135 [2024-12-05 12:14:31.604328] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:07.135 [2024-12-05 12:14:31.704072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:07.135 [2024-12-05 12:14:31.756067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:07.135 [2024-12-05 12:14:31.756119] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:07.135 [2024-12-05 12:14:31.756127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:07.135 [2024-12-05 12:14:31.756135] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:07.135 [2024-12-05 12:14:31.756141] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:07.135 [2024-12-05 12:14:31.757815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:07.135 [2024-12-05 12:14:31.757821] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:07.396 12:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:07.396 12:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:31:07.396 12:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:07.396 12:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:07.396 12:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:07.657 12:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:07.657 12:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1493113 00:31:07.657 12:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:07.657 [2024-12-05 12:14:32.642368] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:07.657 12:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:31:07.918 Malloc0 00:31:07.918 12:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:08.180 12:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:08.440 12:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:08.440 [2024-12-05 12:14:33.479823] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.701 12:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:08.701 [2024-12-05 12:14:33.676481] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:08.701 12:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:08.701 12:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1493478 00:31:08.701 12:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:08.701 12:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1493478 /var/tmp/bdevperf.sock 00:31:08.701 12:14:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1493478 ']' 00:31:08.701 12:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:08.701 12:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:08.701 12:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:08.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:08.701 12:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:08.701 12:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:09.645 12:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:09.645 12:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:31:09.645 12:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:09.907 12:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:10.167 Nvme0n1 00:31:10.167 12:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:10.427 Nvme0n1 00:31:10.427 12:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:10.427 12:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:12.984 12:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:12.984 12:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:12.985 12:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:12.985 12:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:13.926 12:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:13.926 12:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:13.926 12:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.926 12:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:14.187 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.187 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:14.187 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:14.187 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.187 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:14.187 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:14.187 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.187 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:14.447 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.447 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:14.447 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.447 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:14.708 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.708 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:14.708 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.708 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:14.969 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.969 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:14.969 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:14.969 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:14.969 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.969 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:14.969 12:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:15.231 12:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:15.491 12:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:16.432 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:16.432 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:16.432 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.432 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:16.692 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:16.692 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:16.692 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.692 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:16.692 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.692 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:16.692 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.692 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:16.953 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.953 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:16.953 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.953 12:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:17.214 12:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.214 12:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:17.214 12:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.214 12:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:17.214 12:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.214 12:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:17.214 12:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:17.214 12:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:17.476 12:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.476 12:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:17.476 12:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:17.736 12:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:17.736 12:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:19.122 12:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:19.122 12:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:19.122 12:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.122 12:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:19.122 12:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.122 12:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:19.122 12:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.122 12:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:19.122 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:19.122 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:19.122 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.122 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:19.382 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.382 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:19.382 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.382 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:19.641 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.641 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:19.641 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.641 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:19.902 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.902 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:19.902 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.902 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:19.902 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.902 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:19.902 12:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:20.163 12:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:20.422 12:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:21.364 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:21.364 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:21.364 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.364 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:21.624 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.624 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:21.624 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.624 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:21.624 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:21.624 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:21.624 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.624 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:21.885 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:21.885 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:21.885 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.885 12:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:22.145 12:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.145 12:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:22.145 12:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.145 12:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:22.145 12:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.145 12:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:22.407 12:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.407 12:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:22.407 12:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:22.407 12:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:22.407 12:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:22.668 12:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:22.929 12:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:23.871 12:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:23.871 12:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:23.871 12:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.871 12:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:24.133 12:14:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:24.133 12:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:24.133 12:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.133 12:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:24.133 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:24.133 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:24.133 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.133 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:24.394 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.394 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:24.394 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.394 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:24.656 
12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:24.656 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:24.656 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.656 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:24.656 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:24.656 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:24.656 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:24.656 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:24.917 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:24.917 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:24.917 12:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:25.178 12:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:25.439 12:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:26.382 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:26.382 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:26.382 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.382 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:26.382 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:26.382 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:26.383 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.383 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:26.643 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.643 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:26.644 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.644 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:26.904 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:26.904 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:26.904 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:26.904 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:27.165 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.165 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:27.165 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.165 12:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:27.165 12:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:27.165 12:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:27.165 12:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:27.165 12:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:27.426 12:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:27.426 12:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:27.687 12:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:27.687 12:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:27.687 12:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:27.947 12:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:28.889 12:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:28.889 12:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:28.889 12:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:31:28.889 12:14:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:29.151 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.151 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:29.151 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:29.151 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.413 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.413 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:29.413 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.413 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:29.413 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.413 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:29.413 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:31:29.413 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:29.673 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.673 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:29.673 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.673 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:29.933 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:29.933 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:29.933 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:29.933 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:30.193 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:30.193 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:30.194 12:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:30.194 12:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:30.454 12:14:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:31.454 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:31.454 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:31.454 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.454 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:31.764 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:31.764 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:31.764 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.764 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:31.764 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:31.764 12:14:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:31.764 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:31.764 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:32.090 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.090 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:32.090 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.090 12:14:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:32.090 12:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.090 12:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:32.090 12:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:32.090 12:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.352 12:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.352 
12:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:32.352 12:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:32.352 12:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:32.612 12:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:32.612 12:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:31:32.612 12:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:32.612 12:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:32.872 12:14:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:31:33.812 12:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:31:33.812 12:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:33.812 12:14:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:33.812 12:14:58 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:34.072 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.072 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:34.072 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.072 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:34.332 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.332 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:34.332 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.332 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:34.332 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.332 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:34.332 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:34.591 12:14:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.591 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.591 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:34.591 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.591 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:34.851 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:34.851 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:34.851 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:34.851 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:35.110 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:35.110 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:31:35.110 12:14:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:35.110 12:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:35.370 12:15:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:31:36.306 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:31:36.306 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:36.306 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.306 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:36.564 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.564 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:36.564 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.564 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:36.823 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:36.823 
12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:36.823 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.823 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:36.823 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:36.823 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:36.823 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:36.823 12:15:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:37.081 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.081 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:37.081 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.081 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:37.340 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:31:37.340 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:37.340 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.340 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:37.605 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:37.605 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1493478 00:31:37.605 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1493478 ']' 00:31:37.605 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1493478 00:31:37.605 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:31:37.605 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.605 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1493478 00:31:37.605 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:31:37.605 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:31:37.605 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1493478' 00:31:37.605 killing process with pid 1493478 00:31:37.605 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1493478 
00:31:37.605 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1493478 00:31:37.605 { 00:31:37.605 "results": [ 00:31:37.605 { 00:31:37.605 "job": "Nvme0n1", 00:31:37.605 "core_mask": "0x4", 00:31:37.605 "workload": "verify", 00:31:37.605 "status": "terminated", 00:31:37.605 "verify_range": { 00:31:37.605 "start": 0, 00:31:37.605 "length": 16384 00:31:37.605 }, 00:31:37.605 "queue_depth": 128, 00:31:37.605 "io_size": 4096, 00:31:37.605 "runtime": 26.894109, 00:31:37.605 "iops": 11909.039262092676, 00:31:37.605 "mibps": 46.519684617549515, 00:31:37.605 "io_failed": 0, 00:31:37.605 "io_timeout": 0, 00:31:37.605 "avg_latency_us": 10729.435598059634, 00:31:37.605 "min_latency_us": 453.97333333333336, 00:31:37.605 "max_latency_us": 3019898.88 00:31:37.605 } 00:31:37.605 ], 00:31:37.605 "core_count": 1 00:31:37.605 } 00:31:37.605 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1493478 00:31:37.605 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:37.605 [2024-12-05 12:14:33.754755] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:31:37.605 [2024-12-05 12:14:33.754861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493478 ] 00:31:37.605 [2024-12-05 12:14:33.850097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:37.605 [2024-12-05 12:14:33.900884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:37.605 Running I/O for 90 seconds... 
00:31:37.605 10177.00 IOPS, 39.75 MiB/s [2024-12-05T11:15:02.654Z] 10777.50 IOPS, 42.10 MiB/s [2024-12-05T11:15:02.654Z] 10921.00 IOPS, 42.66 MiB/s [2024-12-05T11:15:02.654Z] 11011.00 IOPS, 43.01 MiB/s [2024-12-05T11:15:02.654Z] 11375.60 IOPS, 44.44 MiB/s [2024-12-05T11:15:02.654Z] 11640.00 IOPS, 45.47 MiB/s [2024-12-05T11:15:02.654Z] 11808.57 IOPS, 46.13 MiB/s [2024-12-05T11:15:02.654Z] 11950.62 IOPS, 46.68 MiB/s [2024-12-05T11:15:02.654Z] 12047.56 IOPS, 47.06 MiB/s [2024-12-05T11:15:02.654Z] 12144.10 IOPS, 47.44 MiB/s [2024-12-05T11:15:02.654Z] 12206.36 IOPS, 47.68 MiB/s [2024-12-05T11:15:02.654Z] [2024-12-05 12:14:47.541968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:126320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:126328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:126336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:126344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:15 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:126352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:126360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:126368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:126376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:126392 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:126400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:126408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:126416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:126424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:126432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 
cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:126440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:126448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:126456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:126464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:126472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:126480 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:126496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:126504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:126512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:126520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.605 [2024-12-05 12:14:47.542481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 
sqhd:0045 p:0 m:0 dnr:0 00:31:37.605 [2024-12-05 12:14:47.542492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:126528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:126536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:126544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:126552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:126560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:126568 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:126576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:126584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:126592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:126600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:126608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 
sqhd:0050 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:126616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:126624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:126632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:126640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:126648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:126656 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:126664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:126672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:126680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.542826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:126688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.542832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:126696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005b 
p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:126712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:126728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:126736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:37.606 [2024-12-05 12:14:47.543752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:126752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:126760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:126768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:126776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:126784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 
00:31:37.606 [2024-12-05 12:14:47.543864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:126800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:126808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:126816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:126824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:126832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:37.606 [2024-12-05 12:14:47.543971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.543986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.543991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:126848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.544010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:126856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.544030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:126864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.544049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:126872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.544068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 
00:31:37.606 [2024-12-05 12:14:47.544082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:126880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.544088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:126888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.544108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:126896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.544127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:126904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.544146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:126912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.544166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:126920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:37.606 [2024-12-05 12:14:47.544185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.544204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:126936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.544225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:126944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.606 [2024-12-05 12:14:47.544296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:126952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.606 [2024-12-05 12:14:47.544318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:126960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.606 [2024-12-05 12:14:47.544339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:37.606 
[2024-12-05 12:14:47.544355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:126968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.606 [2024-12-05 12:14:47.544360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:126976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.606 [2024-12-05 12:14:47.544382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:126984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.606 [2024-12-05 12:14:47.544403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.606 [2024-12-05 12:14:47.544424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:127000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.606 [2024-12-05 12:14:47.544445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:127008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.606 [2024-12-05 
12:14:47.544470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:127016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.606 [2024-12-05 12:14:47.544491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:127024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.606 [2024-12-05 12:14:47.544511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:37.606 [2024-12-05 12:14:47.544526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:127032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:14:47.544531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:14:47.544547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:14:47.544552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:14:47.544568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:127048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:14:47.544573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 
12:14:47.544588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:14:47.544594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:14:47.544609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:127064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:14:47.544614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:14:47.544629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:127072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:14:47.544635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:37.607 12238.50 IOPS, 47.81 MiB/s [2024-12-05T11:15:02.656Z] 11297.08 IOPS, 44.13 MiB/s [2024-12-05T11:15:02.656Z] 10490.14 IOPS, 40.98 MiB/s [2024-12-05T11:15:02.656Z] 9815.33 IOPS, 38.34 MiB/s [2024-12-05T11:15:02.656Z] 10000.50 IOPS, 39.06 MiB/s [2024-12-05T11:15:02.656Z] 10183.65 IOPS, 39.78 MiB/s [2024-12-05T11:15:02.656Z] 10522.72 IOPS, 41.10 MiB/s [2024-12-05T11:15:02.656Z] 10860.37 IOPS, 42.42 MiB/s [2024-12-05T11:15:02.656Z] 11086.85 IOPS, 43.31 MiB/s [2024-12-05T11:15:02.656Z] 11180.19 IOPS, 43.67 MiB/s [2024-12-05T11:15:02.656Z] 11262.27 IOPS, 43.99 MiB/s [2024-12-05T11:15:02.656Z] 11457.87 IOPS, 44.76 MiB/s [2024-12-05T11:15:02.656Z] 11681.12 IOPS, 45.63 MiB/s [2024-12-05T11:15:02.656Z] [2024-12-05 12:15:00.278229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:105448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278264] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:105480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.278989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.278999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279020] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.607 [2024-12-05 12:15:00.279039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.607 [2024-12-05 12:15:00.279055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.607 [2024-12-05 12:15:00.279070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.607 [2024-12-05 12:15:00.279086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.607 [2024-12-05 12:15:00.279101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.607 [2024-12-05 12:15:00.279116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.607 [2024-12-05 12:15:00.279132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:105384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.607 [2024-12-05 12:15:00.279148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:105416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.607 [2024-12-05 12:15:00.279163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.279991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.279997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.280007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.280013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.280023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.280028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.280038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:106112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.280043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.280053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.280058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.280070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:37.607 [2024-12-05 12:15:00.280075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.280085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.607 [2024-12-05 12:15:00.280091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:37.607 [2024-12-05 12:15:00.280101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.608 [2024-12-05 12:15:00.280106] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:37.608 [2024-12-05 12:15:00.280116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.608 [2024-12-05 12:15:00.280122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:37.608 [2024-12-05 12:15:00.280132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.608 [2024-12-05 12:15:00.280137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:37.608 [2024-12-05 12:15:00.280147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.608 [2024-12-05 12:15:00.280152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:37.608 [2024-12-05 12:15:00.280163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.608 [2024-12-05 12:15:00.280169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:37.608 [2024-12-05 12:15:00.280180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.608 [2024-12-05 12:15:00.280185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:37.608 [2024-12-05 12:15:00.280195] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:105360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.608 [2024-12-05 12:15:00.280200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:37.608 [2024-12-05 12:15:00.280210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.608 [2024-12-05 12:15:00.280216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:37.608 [2024-12-05 12:15:00.280226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:37.608 [2024-12-05 12:15:00.280231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:37.608 11841.12 IOPS, 46.25 MiB/s [2024-12-05T11:15:02.657Z] 11880.15 IOPS, 46.41 MiB/s [2024-12-05T11:15:02.657Z] Received shutdown signal, test time was about 26.894772 seconds 00:31:37.608 00:31:37.608 Latency(us) 00:31:37.608 [2024-12-05T11:15:02.657Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.608 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:37.608 Verification LBA range: start 0x0 length 0x4000 00:31:37.608 Nvme0n1 : 26.89 11909.04 46.52 0.00 0.00 10729.44 453.97 3019898.88 00:31:37.608 [2024-12-05T11:15:02.657Z] =================================================================================================================== 00:31:37.608 [2024-12-05T11:15:02.657Z] Total : 11909.04 46.52 0.00 0.00 10729.44 453.97 3019898.88 00:31:37.608 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@99 -- # sync 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # set +e 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:37.867 rmmod nvme_tcp 00:31:37.867 rmmod nvme_fabrics 00:31:37.867 rmmod nvme_keyring 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # set -e 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # return 0 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # '[' -n 1493113 ']' 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@337 -- # killprocess 1493113 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1493113 ']' 00:31:37.867 12:15:02 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1493113 00:31:37.867 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:31:37.868 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:37.868 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1493113 00:31:38.127 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:38.127 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:38.128 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1493113' 00:31:38.128 killing process with pid 1493113 00:31:38.128 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1493113 00:31:38.128 12:15:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1493113 00:31:38.128 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:38.128 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # nvmf_fini 00:31:38.128 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@254 -- # local dev 00:31:38.128 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@257 -- # remove_target_ns 00:31:38.128 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:38.128 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:38.128 12:15:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # 
_remove_target_ns 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@258 -- # delete_main_bridge 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # return 0 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@212 -- # 
[[ -n '' ]] 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # _dev=0 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # dev_map=() 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@274 -- # iptr 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-save 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-restore 00:31:40.674 00:31:40.674 real 0m41.488s 00:31:40.674 user 1m47.037s 00:31:40.674 sys 0m11.656s 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:40.674 ************************************ 00:31:40.674 END TEST nvmf_host_multipath_status 00:31:40.674 ************************************ 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.674 
************************************ 00:31:40.674 START TEST nvmf_discovery_remove_ifc 00:31:40.674 ************************************ 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:40.674 * Looking for test storage... 00:31:40.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:40.674 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:31:40.675 12:15:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:31:40.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.675 --rc genhtml_branch_coverage=1 00:31:40.675 --rc genhtml_function_coverage=1 00:31:40.675 --rc genhtml_legend=1 00:31:40.675 --rc geninfo_all_blocks=1 00:31:40.675 --rc geninfo_unexecuted_blocks=1 00:31:40.675 00:31:40.675 ' 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:40.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.675 --rc genhtml_branch_coverage=1 00:31:40.675 --rc genhtml_function_coverage=1 00:31:40.675 --rc genhtml_legend=1 00:31:40.675 --rc geninfo_all_blocks=1 00:31:40.675 --rc geninfo_unexecuted_blocks=1 00:31:40.675 00:31:40.675 ' 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:40.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.675 --rc genhtml_branch_coverage=1 00:31:40.675 --rc genhtml_function_coverage=1 00:31:40.675 --rc genhtml_legend=1 00:31:40.675 --rc geninfo_all_blocks=1 00:31:40.675 --rc geninfo_unexecuted_blocks=1 00:31:40.675 00:31:40.675 ' 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:40.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:40.675 --rc genhtml_branch_coverage=1 00:31:40.675 --rc genhtml_function_coverage=1 00:31:40.675 --rc genhtml_legend=1 00:31:40.675 --rc geninfo_all_blocks=1 00:31:40.675 --rc geninfo_unexecuted_blocks=1 00:31:40.675 
00:31:40.675 ' 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@50 -- # : 0 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:31:40.675 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:40.675 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:31:40.676 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:40.676 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:40.676 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.676 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:40.676 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:40.676 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns 00:31:40.676 12:15:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:40.676 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:40.676 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:40.676 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:40.676 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:40.676 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # xtrace_disable 00:31:40.676 12:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # pci_devs=() 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # net_devs=() 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # e810=() 00:31:48.818 12:15:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # local -ga e810 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # x722=() 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # local -ga x722 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # mlx=() 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # local -ga mlx 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:48.818 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 
0000:4b:00.1 (0x8086 - 0x159b)' 00:31:48.818 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:48.818 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:31:48.819 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:48.819 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # is_hw=yes 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@247 -- # create_target_ns 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:48.819 12:15:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:31:48.819 12:15:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:48.819 10.0.0.1 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:48.819 12:15:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:48.819 10.0.0.2 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:48.819 12:15:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:31:48.819 
12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:48.819 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:48.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:48.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.662 ms 00:31:48.820 00:31:48.820 --- 10.0.0.1 ping statistics --- 00:31:48.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.820 rtt min/avg/max/mdev = 0.662/0.662/0.662/0.000 ms 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:31:48.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:48.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:31:48.820 00:31:48.820 --- 10.0.0.2 ping statistics --- 00:31:48.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:48.820 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # return 0 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:31:48.820 12:15:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # 
get_initiator_ip_address initiator1 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # return 1 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev= 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@160 -- # return 0 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # 
local -n ns=NVMF_TARGET_NS_CMD 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 
00:31:48.820 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target1 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # return 1 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev= 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@160 -- # return 0 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:31:48.821 ' 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:48.821 12:15:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=1504023 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 1504023 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1504023 ']' 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:48.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:48.821 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:48.821 [2024-12-05 12:15:13.165971] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:31:48.821 [2024-12-05 12:15:13.166032] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:48.821 [2024-12-05 12:15:13.264167] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:48.821 [2024-12-05 12:15:13.313904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:48.821 [2024-12-05 12:15:13.313949] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:48.821 [2024-12-05 12:15:13.313958] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:48.821 [2024-12-05 12:15:13.313965] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:48.821 [2024-12-05 12:15:13.313972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:48.821 [2024-12-05 12:15:13.314713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:49.081 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:49.081 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:31:49.081 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:49.081 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:49.081 12:15:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:49.081 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:49.081 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:49.081 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:49.081 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:49.081 [2024-12-05 12:15:14.034039] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:49.081 [2024-12-05 12:15:14.042289] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:49.081 null0 00:31:49.081 [2024-12-05 12:15:14.074240] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:49.081 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:49.081 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1504306 00:31:49.081 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1504306 /tmp/host.sock 
00:31:49.081 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:49.081 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1504306 ']' 00:31:49.081 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:31:49.081 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:49.081 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:49.081 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:49.081 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:49.081 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:49.342 [2024-12-05 12:15:14.150930] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:31:49.342 [2024-12-05 12:15:14.150996] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504306 ] 00:31:49.342 [2024-12-05 12:15:14.244301] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:49.342 [2024-12-05 12:15:14.297135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:50.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:50.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:31:50.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:50.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:50.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:50.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.283 12:15:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:50.283 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.283 12:15:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:50.283 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.283 12:15:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:51.224 [2024-12-05 12:15:16.123507] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:51.224 [2024-12-05 12:15:16.123540] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:51.224 [2024-12-05 12:15:16.123556] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:51.224 [2024-12-05 12:15:16.253952] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:51.485 [2024-12-05 12:15:16.472313] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:31:51.485 [2024-12-05 12:15:16.473258] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1f81250:1 started. 
00:31:51.485 [2024-12-05 12:15:16.474834] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:51.485 [2024-12-05 12:15:16.474877] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:51.485 [2024-12-05 12:15:16.474898] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:51.485 [2024-12-05 12:15:16.474911] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:51.485 [2024-12-05 12:15:16.474931] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:51.485 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.485 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:51.485 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:51.485 [2024-12-05 12:15:16.481942] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1f81250 was disconnected and freed. delete nvme_qpair. 
00:31:51.485 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:51.485 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:51.485 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.485 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:51.485 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:51.485 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:51.485 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.485 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:51.485 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_1 00:31:51.745 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 down 00:31:51.745 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:51.745 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:51.745 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:51.745 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:51.745 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:51.745 12:15:16 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:51.745 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:51.745 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:51.745 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:51.745 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:51.745 12:15:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:52.686 12:15:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:52.686 12:15:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:52.686 12:15:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:52.686 12:15:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:52.686 12:15:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:52.686 12:15:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:52.686 12:15:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:52.686 12:15:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:52.946 12:15:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:52.946 12:15:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:53.888 12:15:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:31:53.888 12:15:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:53.888 12:15:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:53.888 12:15:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:53.888 12:15:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:53.888 12:15:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:53.888 12:15:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:53.888 12:15:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:53.888 12:15:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:53.888 12:15:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:54.829 12:15:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:54.829 12:15:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:54.829 12:15:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:54.829 12:15:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:54.829 12:15:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:54.829 12:15:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.829 12:15:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:54.829 12:15:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:54.829 12:15:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:54.829 12:15:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:56.215 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:56.215 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.215 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:56.215 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.215 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:56.215 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.215 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:56.215 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.215 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:56.215 12:15:20 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:57.156 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:57.156 [2024-12-05 12:15:21.915435] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:57.156 [2024-12-05 12:15:21.915477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:57.156 [2024-12-05 12:15:21.915487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.156 [2024-12-05 12:15:21.915494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:57.156 [2024-12-05 12:15:21.915500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.156 [2024-12-05 12:15:21.915506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:57.156 [2024-12-05 12:15:21.915511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.156 [2024-12-05 12:15:21.915516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:57.156 [2024-12-05 12:15:21.915522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.156 [2024-12-05 12:15:21.915528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:57.156 [2024-12-05 12:15:21.915533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.156 [2024-12-05 12:15:21.915538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5da50 is same with the state(6) to be set 00:31:57.156 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:57.156 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:57.156 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.156 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:57.156 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:57.156 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:57.156 [2024-12-05 12:15:21.925458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5da50 (9): Bad file descriptor 00:31:57.156 [2024-12-05 12:15:21.935489] bdev_nvme.c:2549:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:31:57.156 [2024-12-05 12:15:21.935498] bdev_nvme.c:2537:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:31:57.156 [2024-12-05 12:15:21.935503] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:57.156 [2024-12-05 12:15:21.935507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:57.156 [2024-12-05 12:15:21.935525] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:31:57.156 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.156 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:57.156 12:15:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:58.098 [2024-12-05 12:15:22.948539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:58.098 [2024-12-05 12:15:22.948631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f5da50 with addr=10.0.0.2, port=4420 00:31:58.098 [2024-12-05 12:15:22.948676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f5da50 is same with the state(6) to be set 00:31:58.098 [2024-12-05 12:15:22.948733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f5da50 (9): Bad file descriptor 00:31:58.098 [2024-12-05 12:15:22.948848] bdev_nvme.c:3172:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:31:58.098 [2024-12-05 12:15:22.948905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:58.098 [2024-12-05 12:15:22.948927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:58.098 [2024-12-05 12:15:22.948952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:31:58.098 [2024-12-05 12:15:22.948973] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:58.098 [2024-12-05 12:15:22.948989] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:31:58.098 [2024-12-05 12:15:22.949004] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:58.098 [2024-12-05 12:15:22.949026] bdev_nvme.c:2133:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:31:58.098 [2024-12-05 12:15:22.949041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:58.098 12:15:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:58.098 12:15:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.098 12:15:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:58.098 12:15:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:58.098 12:15:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:58.098 12:15:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.098 12:15:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:58.098 12:15:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:58.099 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:58.099 12:15:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:59.041 [2024-12-05 12:15:23.951447] bdev_nvme.c:2521:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:31:59.041 [2024-12-05 12:15:23.951465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:31:59.041 [2024-12-05 12:15:23.951474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:31:59.041 [2024-12-05 12:15:23.951480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:31:59.041 [2024-12-05 12:15:23.951486] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:31:59.041 [2024-12-05 12:15:23.951491] bdev_nvme.c:2511:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:31:59.041 [2024-12-05 12:15:23.951495] bdev_nvme.c:2278:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:31:59.041 [2024-12-05 12:15:23.951498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:31:59.041 [2024-12-05 12:15:23.951516] bdev_nvme.c:7262:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:59.041 [2024-12-05 12:15:23.951533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.041 [2024-12-05 12:15:23.951543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.041 [2024-12-05 12:15:23.951551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.041 [2024-12-05 12:15:23.951556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.041 [2024-12-05 12:15:23.951562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:31:59.041 [2024-12-05 12:15:23.951567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.041 [2024-12-05 12:15:23.951573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.041 [2024-12-05 12:15:23.951578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.041 [2024-12-05 12:15:23.951584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:59.041 [2024-12-05 12:15:23.951589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:59.041 [2024-12-05 12:15:23.951595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:31:59.041 [2024-12-05 12:15:23.952323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f4d1a0 (9): Bad file descriptor 00:31:59.041 [2024-12-05 12:15:23.953333] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:59.041 [2024-12-05 12:15:23.953342] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:31:59.041 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:59.041 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.041 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.041 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 
00:31:59.041 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:59.041 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.041 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:59.041 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:59.041 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:59.041 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:59.041 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:59.300 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:59.300 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:59.300 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.300 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:59.300 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:59.300 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:59.300 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.300 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:59.300 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:59.300 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:59.300 12:15:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:00.283 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:00.283 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:00.283 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:00.283 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.283 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:00.283 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:00.283 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:00.283 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.283 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:00.283 12:15:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:01.221 [2024-12-05 12:15:25.967837] bdev_nvme.c:7511:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:01.221 [2024-12-05 12:15:25.967850] bdev_nvme.c:7597:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:01.221 [2024-12-05 12:15:25.967860] bdev_nvme.c:7474:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:01.221 [2024-12-05 12:15:26.096235] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:01.221 [2024-12-05 12:15:26.155778] bdev_nvme.c:5656:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:32:01.222 [2024-12-05 12:15:26.156470] bdev_nvme.c:1989:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1f5d7b0:1 started. 00:32:01.222 [2024-12-05 12:15:26.157383] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:01.222 [2024-12-05 12:15:26.157409] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:01.222 [2024-12-05 12:15:26.157424] bdev_nvme.c:8307:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:01.222 [2024-12-05 12:15:26.157435] bdev_nvme.c:7330:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:01.222 [2024-12-05 12:15:26.157440] bdev_nvme.c:7289:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:01.222 [2024-12-05 12:15:26.206062] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1f5d7b0 was disconnected and freed. delete nvme_qpair. 
00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1504306 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1504306 ']' 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1504306 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1504306 
00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1504306' 00:32:01.481 killing process with pid 1504306 00:32:01.481 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1504306 00:32:01.482 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1504306 00:32:01.482 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:01.482 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:01.482 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync 00:32:01.482 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:01.482 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e 00:32:01.482 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:01.482 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:01.482 rmmod nvme_tcp 00:32:01.482 rmmod nvme_fabrics 00:32:01.742 rmmod nvme_keyring 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 1504023 ']' 00:32:01.742 
12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 1504023 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1504023 ']' 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1504023 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1504023 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1504023' 00:32:01.742 killing process with pid 1504023 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1504023 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1504023 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@254 -- # local dev 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # remove_target_ns 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:01.742 12:15:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:01.742 12:15:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # return 0 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 
00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=() 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@274 -- # iptr 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-save 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-restore 00:32:04.291 00:32:04.291 real 0m23.607s 00:32:04.291 user 0m27.575s 00:32:04.291 sys 0m7.314s 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:04.291 ************************************ 00:32:04.291 END TEST nvmf_discovery_remove_ifc 00:32:04.291 ************************************ 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
00:32:04.291 12:15:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:04.291 ************************************ 00:32:04.291 START TEST nvmf_identify_kernel_target 00:32:04.291 ************************************ 00:32:04.291 12:15:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:04.291 * Looking for test storage... 00:32:04.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:32:04.291 12:15:29 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # 
ver2[v]=2 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:04.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.291 --rc genhtml_branch_coverage=1 00:32:04.291 --rc genhtml_function_coverage=1 00:32:04.291 --rc genhtml_legend=1 00:32:04.291 --rc geninfo_all_blocks=1 00:32:04.291 --rc geninfo_unexecuted_blocks=1 00:32:04.291 00:32:04.291 ' 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:04.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.291 --rc genhtml_branch_coverage=1 00:32:04.291 --rc genhtml_function_coverage=1 00:32:04.291 --rc genhtml_legend=1 00:32:04.291 --rc geninfo_all_blocks=1 00:32:04.291 --rc geninfo_unexecuted_blocks=1 00:32:04.291 00:32:04.291 ' 00:32:04.291 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:04.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.291 --rc genhtml_branch_coverage=1 00:32:04.291 --rc genhtml_function_coverage=1 00:32:04.291 --rc genhtml_legend=1 00:32:04.291 --rc geninfo_all_blocks=1 00:32:04.291 --rc geninfo_unexecuted_blocks=1 00:32:04.291 00:32:04.292 ' 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:04.292 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:04.292 --rc genhtml_branch_coverage=1 00:32:04.292 --rc genhtml_function_coverage=1 00:32:04.292 --rc genhtml_legend=1 00:32:04.292 --rc geninfo_all_blocks=1 00:32:04.292 --rc geninfo_unexecuted_blocks=1 00:32:04.292 00:32:04.292 ' 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@50 -- # : 0 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:32:04.292 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # remove_target_ns 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # xtrace_disable 00:32:04.292 12:15:29 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@10 -- # set +x 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # pci_devs=() 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # net_devs=() 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # e810=() 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # local -ga e810 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # x722=() 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # local -ga x722 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # mlx=() 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # local -ga mlx 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- 
# pci_devs=("${e810[@]}") 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:12.442 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:12.442 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:12.442 12:15:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:12.442 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:12.442 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # is_hw=yes 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@247 -- # create_target_ns 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@136 -- # ip netns add 
nvmf_ns_spdk 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:12.442 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@28 -- # local -g _dev 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:12.443 12:15:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 
in_ns= 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772161 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:12.443 10.0.0.1 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@11 -- # local val=167772162 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:12.443 10.0.0.2 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
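The `val_to_ip` helper traced above turns the 32-bit pool values 167772161 and 167772162 into the dotted-quad addresses 10.0.0.1 and 10.0.0.2 that get assigned to `cvl_0_0` and `cvl_0_1`. A minimal sketch of that conversion is below; only the `printf '%u.%u.%u.%u\n'` format string appears verbatim in the trace, so the bit-shift decomposition of `val` into octets is an assumption about how the helper derives its arguments:

```shell
# Hedged sketch of nvmf/setup.sh's val_to_ip: render a 32-bit integer
# as a dotted-quad IPv4 address (167772161 == 0x0A000001 -> 10.0.0.1).
# The shift-and-mask octet extraction is assumed, not copied from the log.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This matches the per-pair allocation seen earlier in the trace, where `setup_interfaces` hands each initiator/target pair two consecutive values from the `0x0a000001` pool.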
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # 
(( pair = 0 )) 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 
NVMF_TARGET_NS_CMD 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:12.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:12.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.569 ms 00:32:12.443 00:32:12.443 --- 10.0.0.1 ping statistics --- 00:32:12.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.443 rtt min/avg/max/mdev = 0.569/0.569/0.569/0.000 ms 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:12.443 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:32:12.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:12.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:32:12.444 00:32:12.444 --- 10.0.0.2 ping statistics --- 00:32:12.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:12.444 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # return 0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # return 1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev= 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@160 -- # return 0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:12.444 12:15:36 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # return 1 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev= 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@160 -- # return 0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:32:12.444 ' 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local 
dev=initiator0 in_ns= ip 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:12.444 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 
00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # local block nvme 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # modprobe nvmet 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:12.445 12:15:36 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:15.750 Waiting for block devices as requested 00:32:15.750 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:15.750 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:15.750 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:15.750 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:15.750 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:15.750 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:15.750 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:16.012 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:16.012 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:16.012 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:16.272 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:16.272 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:16.272 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:16.533 0000:00:01.2 
(8086 0b00): vfio-pci -> ioatdma 00:32:16.533 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:16.533 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:16.827 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:16.827 No valid GPT data, bailing 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # [[ 
-b /dev/nvme0n1 ]] 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # echo 1 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@471 -- # echo 1 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # echo tcp 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # echo 4420 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # echo ipv4 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:32:16.827 00:32:16.827 Discovery Log Number of Records 2, Generation counter 2 00:32:16.827 =====Discovery Log Entry 0====== 00:32:16.827 
trtype: tcp 00:32:16.827 adrfam: ipv4 00:32:16.827 subtype: current discovery subsystem 00:32:16.827 treq: not specified, sq flow control disable supported 00:32:16.827 portid: 1 00:32:16.827 trsvcid: 4420 00:32:16.827 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:16.827 traddr: 10.0.0.1 00:32:16.827 eflags: none 00:32:16.827 sectype: none 00:32:16.827 =====Discovery Log Entry 1====== 00:32:16.827 trtype: tcp 00:32:16.827 adrfam: ipv4 00:32:16.827 subtype: nvme subsystem 00:32:16.827 treq: not specified, sq flow control disable supported 00:32:16.827 portid: 1 00:32:16.827 trsvcid: 4420 00:32:16.827 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:16.827 traddr: 10.0.0.1 00:32:16.827 eflags: none 00:32:16.827 sectype: none 00:32:16.827 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:16.827 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:17.116 ===================================================== 00:32:17.116 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:17.116 ===================================================== 00:32:17.116 Controller Capabilities/Features 00:32:17.116 ================================ 00:32:17.116 Vendor ID: 0000 00:32:17.116 Subsystem Vendor ID: 0000 00:32:17.116 Serial Number: 1fd87ba1253f9c4fcd18 00:32:17.116 Model Number: Linux 00:32:17.116 Firmware Version: 6.8.9-20 00:32:17.116 Recommended Arb Burst: 0 00:32:17.116 IEEE OUI Identifier: 00 00 00 00:32:17.116 Multi-path I/O 00:32:17.116 May have multiple subsystem ports: No 00:32:17.116 May have multiple controllers: No 00:32:17.116 Associated with SR-IOV VF: No 00:32:17.116 Max Data Transfer Size: Unlimited 00:32:17.116 Max Number of Namespaces: 0 00:32:17.116 Max Number of I/O Queues: 1024 00:32:17.116 NVMe Specification Version (VS): 1.3 00:32:17.116 NVMe 
Specification Version (Identify): 1.3 00:32:17.116 Maximum Queue Entries: 1024 00:32:17.116 Contiguous Queues Required: No 00:32:17.116 Arbitration Mechanisms Supported 00:32:17.116 Weighted Round Robin: Not Supported 00:32:17.116 Vendor Specific: Not Supported 00:32:17.116 Reset Timeout: 7500 ms 00:32:17.116 Doorbell Stride: 4 bytes 00:32:17.116 NVM Subsystem Reset: Not Supported 00:32:17.116 Command Sets Supported 00:32:17.116 NVM Command Set: Supported 00:32:17.116 Boot Partition: Not Supported 00:32:17.116 Memory Page Size Minimum: 4096 bytes 00:32:17.116 Memory Page Size Maximum: 4096 bytes 00:32:17.116 Persistent Memory Region: Not Supported 00:32:17.116 Optional Asynchronous Events Supported 00:32:17.116 Namespace Attribute Notices: Not Supported 00:32:17.116 Firmware Activation Notices: Not Supported 00:32:17.116 ANA Change Notices: Not Supported 00:32:17.116 PLE Aggregate Log Change Notices: Not Supported 00:32:17.116 LBA Status Info Alert Notices: Not Supported 00:32:17.116 EGE Aggregate Log Change Notices: Not Supported 00:32:17.116 Normal NVM Subsystem Shutdown event: Not Supported 00:32:17.116 Zone Descriptor Change Notices: Not Supported 00:32:17.116 Discovery Log Change Notices: Supported 00:32:17.116 Controller Attributes 00:32:17.116 128-bit Host Identifier: Not Supported 00:32:17.116 Non-Operational Permissive Mode: Not Supported 00:32:17.116 NVM Sets: Not Supported 00:32:17.116 Read Recovery Levels: Not Supported 00:32:17.116 Endurance Groups: Not Supported 00:32:17.116 Predictable Latency Mode: Not Supported 00:32:17.116 Traffic Based Keep ALive: Not Supported 00:32:17.116 Namespace Granularity: Not Supported 00:32:17.116 SQ Associations: Not Supported 00:32:17.116 UUID List: Not Supported 00:32:17.116 Multi-Domain Subsystem: Not Supported 00:32:17.116 Fixed Capacity Management: Not Supported 00:32:17.116 Variable Capacity Management: Not Supported 00:32:17.116 Delete Endurance Group: Not Supported 00:32:17.116 Delete NVM Set: Not Supported 
00:32:17.116 Extended LBA Formats Supported: Not Supported 00:32:17.116 Flexible Data Placement Supported: Not Supported 00:32:17.116 00:32:17.116 Controller Memory Buffer Support 00:32:17.116 ================================ 00:32:17.116 Supported: No 00:32:17.116 00:32:17.116 Persistent Memory Region Support 00:32:17.116 ================================ 00:32:17.116 Supported: No 00:32:17.116 00:32:17.116 Admin Command Set Attributes 00:32:17.116 ============================ 00:32:17.116 Security Send/Receive: Not Supported 00:32:17.116 Format NVM: Not Supported 00:32:17.116 Firmware Activate/Download: Not Supported 00:32:17.116 Namespace Management: Not Supported 00:32:17.116 Device Self-Test: Not Supported 00:32:17.116 Directives: Not Supported 00:32:17.116 NVMe-MI: Not Supported 00:32:17.116 Virtualization Management: Not Supported 00:32:17.116 Doorbell Buffer Config: Not Supported 00:32:17.116 Get LBA Status Capability: Not Supported 00:32:17.116 Command & Feature Lockdown Capability: Not Supported 00:32:17.116 Abort Command Limit: 1 00:32:17.116 Async Event Request Limit: 1 00:32:17.116 Number of Firmware Slots: N/A 00:32:17.116 Firmware Slot 1 Read-Only: N/A 00:32:17.116 Firmware Activation Without Reset: N/A 00:32:17.116 Multiple Update Detection Support: N/A 00:32:17.116 Firmware Update Granularity: No Information Provided 00:32:17.116 Per-Namespace SMART Log: No 00:32:17.116 Asymmetric Namespace Access Log Page: Not Supported 00:32:17.116 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:17.116 Command Effects Log Page: Not Supported 00:32:17.116 Get Log Page Extended Data: Supported 00:32:17.116 Telemetry Log Pages: Not Supported 00:32:17.116 Persistent Event Log Pages: Not Supported 00:32:17.116 Supported Log Pages Log Page: May Support 00:32:17.116 Commands Supported & Effects Log Page: Not Supported 00:32:17.116 Feature Identifiers & Effects Log Page:May Support 00:32:17.116 NVMe-MI Commands & Effects Log Page: May Support 00:32:17.116 Data 
Area 4 for Telemetry Log: Not Supported 00:32:17.116 Error Log Page Entries Supported: 1 00:32:17.116 Keep Alive: Not Supported 00:32:17.116 00:32:17.116 NVM Command Set Attributes 00:32:17.116 ========================== 00:32:17.116 Submission Queue Entry Size 00:32:17.116 Max: 1 00:32:17.116 Min: 1 00:32:17.116 Completion Queue Entry Size 00:32:17.116 Max: 1 00:32:17.116 Min: 1 00:32:17.116 Number of Namespaces: 0 00:32:17.116 Compare Command: Not Supported 00:32:17.116 Write Uncorrectable Command: Not Supported 00:32:17.116 Dataset Management Command: Not Supported 00:32:17.116 Write Zeroes Command: Not Supported 00:32:17.116 Set Features Save Field: Not Supported 00:32:17.116 Reservations: Not Supported 00:32:17.116 Timestamp: Not Supported 00:32:17.116 Copy: Not Supported 00:32:17.116 Volatile Write Cache: Not Present 00:32:17.116 Atomic Write Unit (Normal): 1 00:32:17.116 Atomic Write Unit (PFail): 1 00:32:17.116 Atomic Compare & Write Unit: 1 00:32:17.116 Fused Compare & Write: Not Supported 00:32:17.116 Scatter-Gather List 00:32:17.116 SGL Command Set: Supported 00:32:17.116 SGL Keyed: Not Supported 00:32:17.116 SGL Bit Bucket Descriptor: Not Supported 00:32:17.116 SGL Metadata Pointer: Not Supported 00:32:17.116 Oversized SGL: Not Supported 00:32:17.116 SGL Metadata Address: Not Supported 00:32:17.117 SGL Offset: Supported 00:32:17.117 Transport SGL Data Block: Not Supported 00:32:17.117 Replay Protected Memory Block: Not Supported 00:32:17.117 00:32:17.117 Firmware Slot Information 00:32:17.117 ========================= 00:32:17.117 Active slot: 0 00:32:17.117 00:32:17.117 00:32:17.117 Error Log 00:32:17.117 ========= 00:32:17.117 00:32:17.117 Active Namespaces 00:32:17.117 ================= 00:32:17.117 Discovery Log Page 00:32:17.117 ================== 00:32:17.117 Generation Counter: 2 00:32:17.117 Number of Records: 2 00:32:17.117 Record Format: 0 00:32:17.117 00:32:17.117 Discovery Log Entry 0 00:32:17.117 ---------------------- 00:32:17.117 
Transport Type: 3 (TCP) 00:32:17.117 Address Family: 1 (IPv4) 00:32:17.117 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:17.117 Entry Flags: 00:32:17.117 Duplicate Returned Information: 0 00:32:17.117 Explicit Persistent Connection Support for Discovery: 0 00:32:17.117 Transport Requirements: 00:32:17.117 Secure Channel: Not Specified 00:32:17.117 Port ID: 1 (0x0001) 00:32:17.117 Controller ID: 65535 (0xffff) 00:32:17.117 Admin Max SQ Size: 32 00:32:17.117 Transport Service Identifier: 4420 00:32:17.117 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:17.117 Transport Address: 10.0.0.1 00:32:17.117 Discovery Log Entry 1 00:32:17.117 ---------------------- 00:32:17.117 Transport Type: 3 (TCP) 00:32:17.117 Address Family: 1 (IPv4) 00:32:17.117 Subsystem Type: 2 (NVM Subsystem) 00:32:17.117 Entry Flags: 00:32:17.117 Duplicate Returned Information: 0 00:32:17.117 Explicit Persistent Connection Support for Discovery: 0 00:32:17.117 Transport Requirements: 00:32:17.117 Secure Channel: Not Specified 00:32:17.117 Port ID: 1 (0x0001) 00:32:17.117 Controller ID: 65535 (0xffff) 00:32:17.117 Admin Max SQ Size: 32 00:32:17.117 Transport Service Identifier: 4420 00:32:17.117 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:17.117 Transport Address: 10.0.0.1 00:32:17.117 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:17.117 get_feature(0x01) failed 00:32:17.117 get_feature(0x02) failed 00:32:17.117 get_feature(0x04) failed 00:32:17.117 ===================================================== 00:32:17.117 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:17.117 ===================================================== 00:32:17.117 Controller Capabilities/Features 00:32:17.117 
================================ 00:32:17.117 Vendor ID: 0000 00:32:17.117 Subsystem Vendor ID: 0000 00:32:17.117 Serial Number: d67ca9c1f878e72fd88e 00:32:17.117 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:17.117 Firmware Version: 6.8.9-20 00:32:17.117 Recommended Arb Burst: 6 00:32:17.117 IEEE OUI Identifier: 00 00 00 00:32:17.117 Multi-path I/O 00:32:17.117 May have multiple subsystem ports: Yes 00:32:17.117 May have multiple controllers: Yes 00:32:17.117 Associated with SR-IOV VF: No 00:32:17.117 Max Data Transfer Size: Unlimited 00:32:17.117 Max Number of Namespaces: 1024 00:32:17.117 Max Number of I/O Queues: 128 00:32:17.117 NVMe Specification Version (VS): 1.3 00:32:17.117 NVMe Specification Version (Identify): 1.3 00:32:17.117 Maximum Queue Entries: 1024 00:32:17.117 Contiguous Queues Required: No 00:32:17.117 Arbitration Mechanisms Supported 00:32:17.117 Weighted Round Robin: Not Supported 00:32:17.117 Vendor Specific: Not Supported 00:32:17.117 Reset Timeout: 7500 ms 00:32:17.117 Doorbell Stride: 4 bytes 00:32:17.117 NVM Subsystem Reset: Not Supported 00:32:17.117 Command Sets Supported 00:32:17.117 NVM Command Set: Supported 00:32:17.117 Boot Partition: Not Supported 00:32:17.117 Memory Page Size Minimum: 4096 bytes 00:32:17.117 Memory Page Size Maximum: 4096 bytes 00:32:17.117 Persistent Memory Region: Not Supported 00:32:17.117 Optional Asynchronous Events Supported 00:32:17.117 Namespace Attribute Notices: Supported 00:32:17.117 Firmware Activation Notices: Not Supported 00:32:17.117 ANA Change Notices: Supported 00:32:17.117 PLE Aggregate Log Change Notices: Not Supported 00:32:17.117 LBA Status Info Alert Notices: Not Supported 00:32:17.117 EGE Aggregate Log Change Notices: Not Supported 00:32:17.117 Normal NVM Subsystem Shutdown event: Not Supported 00:32:17.117 Zone Descriptor Change Notices: Not Supported 00:32:17.117 Discovery Log Change Notices: Not Supported 00:32:17.117 Controller Attributes 00:32:17.117 128-bit Host Identifier: 
Supported 00:32:17.117 Non-Operational Permissive Mode: Not Supported 00:32:17.117 NVM Sets: Not Supported 00:32:17.117 Read Recovery Levels: Not Supported 00:32:17.117 Endurance Groups: Not Supported 00:32:17.117 Predictable Latency Mode: Not Supported 00:32:17.117 Traffic Based Keep ALive: Supported 00:32:17.117 Namespace Granularity: Not Supported 00:32:17.117 SQ Associations: Not Supported 00:32:17.117 UUID List: Not Supported 00:32:17.117 Multi-Domain Subsystem: Not Supported 00:32:17.117 Fixed Capacity Management: Not Supported 00:32:17.117 Variable Capacity Management: Not Supported 00:32:17.117 Delete Endurance Group: Not Supported 00:32:17.117 Delete NVM Set: Not Supported 00:32:17.117 Extended LBA Formats Supported: Not Supported 00:32:17.117 Flexible Data Placement Supported: Not Supported 00:32:17.117 00:32:17.117 Controller Memory Buffer Support 00:32:17.117 ================================ 00:32:17.117 Supported: No 00:32:17.117 00:32:17.117 Persistent Memory Region Support 00:32:17.117 ================================ 00:32:17.117 Supported: No 00:32:17.117 00:32:17.117 Admin Command Set Attributes 00:32:17.117 ============================ 00:32:17.117 Security Send/Receive: Not Supported 00:32:17.117 Format NVM: Not Supported 00:32:17.117 Firmware Activate/Download: Not Supported 00:32:17.117 Namespace Management: Not Supported 00:32:17.117 Device Self-Test: Not Supported 00:32:17.117 Directives: Not Supported 00:32:17.117 NVMe-MI: Not Supported 00:32:17.117 Virtualization Management: Not Supported 00:32:17.117 Doorbell Buffer Config: Not Supported 00:32:17.117 Get LBA Status Capability: Not Supported 00:32:17.117 Command & Feature Lockdown Capability: Not Supported 00:32:17.117 Abort Command Limit: 4 00:32:17.117 Async Event Request Limit: 4 00:32:17.117 Number of Firmware Slots: N/A 00:32:17.118 Firmware Slot 1 Read-Only: N/A 00:32:17.118 Firmware Activation Without Reset: N/A 00:32:17.118 Multiple Update Detection Support: N/A 00:32:17.118 
Firmware Update Granularity: No Information Provided 00:32:17.118 Per-Namespace SMART Log: Yes 00:32:17.118 Asymmetric Namespace Access Log Page: Supported 00:32:17.118 ANA Transition Time : 10 sec 00:32:17.118 00:32:17.118 Asymmetric Namespace Access Capabilities 00:32:17.118 ANA Optimized State : Supported 00:32:17.118 ANA Non-Optimized State : Supported 00:32:17.118 ANA Inaccessible State : Supported 00:32:17.118 ANA Persistent Loss State : Supported 00:32:17.118 ANA Change State : Supported 00:32:17.118 ANAGRPID is not changed : No 00:32:17.118 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:17.118 00:32:17.118 ANA Group Identifier Maximum : 128 00:32:17.118 Number of ANA Group Identifiers : 128 00:32:17.118 Max Number of Allowed Namespaces : 1024 00:32:17.118 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:17.118 Command Effects Log Page: Supported 00:32:17.118 Get Log Page Extended Data: Supported 00:32:17.118 Telemetry Log Pages: Not Supported 00:32:17.118 Persistent Event Log Pages: Not Supported 00:32:17.118 Supported Log Pages Log Page: May Support 00:32:17.118 Commands Supported & Effects Log Page: Not Supported 00:32:17.118 Feature Identifiers & Effects Log Page:May Support 00:32:17.118 NVMe-MI Commands & Effects Log Page: May Support 00:32:17.118 Data Area 4 for Telemetry Log: Not Supported 00:32:17.118 Error Log Page Entries Supported: 128 00:32:17.118 Keep Alive: Supported 00:32:17.118 Keep Alive Granularity: 1000 ms 00:32:17.118 00:32:17.118 NVM Command Set Attributes 00:32:17.118 ========================== 00:32:17.118 Submission Queue Entry Size 00:32:17.118 Max: 64 00:32:17.118 Min: 64 00:32:17.118 Completion Queue Entry Size 00:32:17.118 Max: 16 00:32:17.118 Min: 16 00:32:17.118 Number of Namespaces: 1024 00:32:17.118 Compare Command: Not Supported 00:32:17.118 Write Uncorrectable Command: Not Supported 00:32:17.118 Dataset Management Command: Supported 00:32:17.118 Write Zeroes Command: Supported 00:32:17.118 Set Features Save Field: 
Not Supported 00:32:17.118 Reservations: Not Supported 00:32:17.118 Timestamp: Not Supported 00:32:17.118 Copy: Not Supported 00:32:17.118 Volatile Write Cache: Present 00:32:17.118 Atomic Write Unit (Normal): 1 00:32:17.118 Atomic Write Unit (PFail): 1 00:32:17.118 Atomic Compare & Write Unit: 1 00:32:17.118 Fused Compare & Write: Not Supported 00:32:17.118 Scatter-Gather List 00:32:17.118 SGL Command Set: Supported 00:32:17.118 SGL Keyed: Not Supported 00:32:17.118 SGL Bit Bucket Descriptor: Not Supported 00:32:17.118 SGL Metadata Pointer: Not Supported 00:32:17.118 Oversized SGL: Not Supported 00:32:17.118 SGL Metadata Address: Not Supported 00:32:17.118 SGL Offset: Supported 00:32:17.118 Transport SGL Data Block: Not Supported 00:32:17.118 Replay Protected Memory Block: Not Supported 00:32:17.118 00:32:17.118 Firmware Slot Information 00:32:17.118 ========================= 00:32:17.118 Active slot: 0 00:32:17.118 00:32:17.118 Asymmetric Namespace Access 00:32:17.118 =========================== 00:32:17.118 Change Count : 0 00:32:17.118 Number of ANA Group Descriptors : 1 00:32:17.118 ANA Group Descriptor : 0 00:32:17.118 ANA Group ID : 1 00:32:17.118 Number of NSID Values : 1 00:32:17.118 Change Count : 0 00:32:17.118 ANA State : 1 00:32:17.118 Namespace Identifier : 1 00:32:17.118 00:32:17.118 Commands Supported and Effects 00:32:17.118 ============================== 00:32:17.118 Admin Commands 00:32:17.118 -------------- 00:32:17.118 Get Log Page (02h): Supported 00:32:17.118 Identify (06h): Supported 00:32:17.118 Abort (08h): Supported 00:32:17.118 Set Features (09h): Supported 00:32:17.118 Get Features (0Ah): Supported 00:32:17.118 Asynchronous Event Request (0Ch): Supported 00:32:17.118 Keep Alive (18h): Supported 00:32:17.118 I/O Commands 00:32:17.118 ------------ 00:32:17.118 Flush (00h): Supported 00:32:17.118 Write (01h): Supported LBA-Change 00:32:17.118 Read (02h): Supported 00:32:17.118 Write Zeroes (08h): Supported LBA-Change 00:32:17.118 Dataset 
Management (09h): Supported 00:32:17.118 00:32:17.118 Error Log 00:32:17.118 ========= 00:32:17.118 Entry: 0 00:32:17.118 Error Count: 0x3 00:32:17.118 Submission Queue Id: 0x0 00:32:17.118 Command Id: 0x5 00:32:17.118 Phase Bit: 0 00:32:17.118 Status Code: 0x2 00:32:17.118 Status Code Type: 0x0 00:32:17.118 Do Not Retry: 1 00:32:17.118 Error Location: 0x28 00:32:17.118 LBA: 0x0 00:32:17.118 Namespace: 0x0 00:32:17.118 Vendor Log Page: 0x0 00:32:17.118 ----------- 00:32:17.118 Entry: 1 00:32:17.118 Error Count: 0x2 00:32:17.118 Submission Queue Id: 0x0 00:32:17.118 Command Id: 0x5 00:32:17.118 Phase Bit: 0 00:32:17.118 Status Code: 0x2 00:32:17.118 Status Code Type: 0x0 00:32:17.118 Do Not Retry: 1 00:32:17.118 Error Location: 0x28 00:32:17.118 LBA: 0x0 00:32:17.118 Namespace: 0x0 00:32:17.118 Vendor Log Page: 0x0 00:32:17.118 ----------- 00:32:17.118 Entry: 2 00:32:17.118 Error Count: 0x1 00:32:17.118 Submission Queue Id: 0x0 00:32:17.118 Command Id: 0x4 00:32:17.118 Phase Bit: 0 00:32:17.118 Status Code: 0x2 00:32:17.118 Status Code Type: 0x0 00:32:17.118 Do Not Retry: 1 00:32:17.118 Error Location: 0x28 00:32:17.118 LBA: 0x0 00:32:17.118 Namespace: 0x0 00:32:17.118 Vendor Log Page: 0x0 00:32:17.118 00:32:17.118 Number of Queues 00:32:17.118 ================ 00:32:17.118 Number of I/O Submission Queues: 128 00:32:17.118 Number of I/O Completion Queues: 128 00:32:17.118 00:32:17.118 ZNS Specific Controller Data 00:32:17.118 ============================ 00:32:17.118 Zone Append Size Limit: 0 00:32:17.118 00:32:17.118 00:32:17.118 Active Namespaces 00:32:17.118 ================= 00:32:17.118 get_feature(0x05) failed 00:32:17.118 Namespace ID:1 00:32:17.118 Command Set Identifier: NVM (00h) 00:32:17.118 Deallocate: Supported 00:32:17.119 Deallocated/Unwritten Error: Not Supported 00:32:17.119 Deallocated Read Value: Unknown 00:32:17.119 Deallocate in Write Zeroes: Not Supported 00:32:17.119 Deallocated Guard Field: 0xFFFF 00:32:17.119 Flush: Supported 00:32:17.119 
Reservation: Not Supported 00:32:17.119 Namespace Sharing Capabilities: Multiple Controllers 00:32:17.119 Size (in LBAs): 3750748848 (1788GiB) 00:32:17.119 Capacity (in LBAs): 3750748848 (1788GiB) 00:32:17.119 Utilization (in LBAs): 3750748848 (1788GiB) 00:32:17.119 UUID: bbcf31cb-3843-476c-a9ac-6195e3f47c45 00:32:17.119 Thin Provisioning: Not Supported 00:32:17.119 Per-NS Atomic Units: Yes 00:32:17.119 Atomic Write Unit (Normal): 8 00:32:17.119 Atomic Write Unit (PFail): 8 00:32:17.119 Preferred Write Granularity: 8 00:32:17.119 Atomic Compare & Write Unit: 8 00:32:17.119 Atomic Boundary Size (Normal): 0 00:32:17.119 Atomic Boundary Size (PFail): 0 00:32:17.119 Atomic Boundary Offset: 0 00:32:17.119 NGUID/EUI64 Never Reused: No 00:32:17.119 ANA group ID: 1 00:32:17.119 Namespace Write Protected: No 00:32:17.119 Number of LBA Formats: 1 00:32:17.119 Current LBA Format: LBA Format #00 00:32:17.119 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:17.119 00:32:17.119 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:17.119 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:17.119 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@99 -- # sync 00:32:17.119 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:17.119 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # set +e 00:32:17.119 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:17.119 12:15:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:17.119 rmmod nvme_tcp 00:32:17.119 rmmod nvme_fabrics 00:32:17.119 12:15:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:17.119 12:15:42 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # set -e 00:32:17.119 12:15:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # return 0 00:32:17.119 12:15:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:32:17.119 12:15:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:17.119 12:15:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # nvmf_fini 00:32:17.119 12:15:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@254 -- # local dev 00:32:17.119 12:15:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:32:17.119 12:15:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:17.119 12:15:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:17.119 12:15:42 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:19.033 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:32:19.033 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:19.033 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # return 0 00:32:19.033 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:19.033 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:19.033 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:19.033 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:32:19.033 12:15:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:32:19.033 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:19.033 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:32:19.033 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # _dev=0 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # dev_map=() 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@274 -- # iptr 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-save 00:32:19.294 12:15:44 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-restore 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # echo 0 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:32:19.294 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:32:19.295 12:15:44 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:22.598 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:22.598 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:22.598 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:22.598 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:22.860 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:22.860 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:22.860 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:22.860 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:22.860 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:22.860 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:22.860 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:22.860 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:22.860 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:22.860 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:22.860 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:22.860 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:22.860 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:23.120 00:32:23.120 real 0m19.015s 00:32:23.120 user 0m5.173s 00:32:23.120 sys 0m10.975s 00:32:23.120 12:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:23.120 12:15:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:23.120 ************************************ 00:32:23.120 END TEST nvmf_identify_kernel_target 00:32:23.120 ************************************ 00:32:23.120 12:15:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:23.120 12:15:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:23.120 12:15:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:23.120 12:15:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.120 ************************************ 00:32:23.120 START TEST nvmf_auth_host 00:32:23.121 ************************************ 00:32:23.121 12:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:23.121 * Looking for test storage... 
00:32:23.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:23.121 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:23.121 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:32:23.121 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:23.382 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:23.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.383 --rc genhtml_branch_coverage=1 00:32:23.383 --rc genhtml_function_coverage=1 00:32:23.383 --rc genhtml_legend=1 00:32:23.383 --rc geninfo_all_blocks=1 00:32:23.383 --rc geninfo_unexecuted_blocks=1 00:32:23.383 00:32:23.383 ' 00:32:23.383 12:15:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:23.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.383 --rc genhtml_branch_coverage=1 00:32:23.383 --rc genhtml_function_coverage=1 00:32:23.383 --rc genhtml_legend=1 00:32:23.383 --rc geninfo_all_blocks=1 00:32:23.383 --rc geninfo_unexecuted_blocks=1 00:32:23.383 00:32:23.383 ' 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:23.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.383 --rc genhtml_branch_coverage=1 00:32:23.383 --rc genhtml_function_coverage=1 00:32:23.383 --rc genhtml_legend=1 00:32:23.383 --rc geninfo_all_blocks=1 00:32:23.383 --rc geninfo_unexecuted_blocks=1 00:32:23.383 00:32:23.383 ' 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:23.383 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:23.383 --rc genhtml_branch_coverage=1 00:32:23.383 --rc genhtml_function_coverage=1 00:32:23.383 --rc genhtml_legend=1 00:32:23.383 --rc geninfo_all_blocks=1 00:32:23.383 --rc geninfo_unexecuted_blocks=1 00:32:23.383 00:32:23.383 ' 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@5 -- # export PATH 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # : 0 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:32:23.383 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:23.383 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # remove_target_ns 
00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # xtrace_disable 00:32:23.384 12:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # pci_devs=() 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # net_devs=() 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # e810=() 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # local -ga e810 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # x722=() 
00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # local -ga x722 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # mlx=() 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # local -ga mlx 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # [[ 
tcp == rdma ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:31.528 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:31.528 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:31.528 12:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:31.528 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:31.529 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:31.529 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # is_hw=yes 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@247 -- # create_target_ns 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:31.529 12:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@28 -- # local -g _dev 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772161 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:31.529 10.0.0.1 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772162 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk 
tee /sys/class/net/cvl_0_1/ifalias 00:32:31.529 10.0.0.2 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:31.529 12:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:31.529 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:31.530 12:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:31.530 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:31.530 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.648 ms 00:32:31.530 00:32:31.530 --- 10.0.0.1 ping statistics --- 00:32:31.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.530 rtt min/avg/max/mdev = 0.648/0.648/0.648/0.000 ms 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:31.530 12:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:32:31.530 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:31.530 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:32:31.530 00:32:31.530 --- 10.0.0.2 ping statistics --- 00:32:31.530 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:31.530 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # return 0 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:31.530 12:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:32:31.530 12:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # return 1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev= 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@160 -- # return 0 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:31.530 12:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # return 1 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev= 00:32:31.530 12:15:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@160 -- # return 0 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:32:31.530 ' 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.530 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # nvmfpid=1518548 00:32:31.531 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # waitforlisten 1518548 00:32:31.531 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:31.531 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1518548 ']' 00:32:31.531 
12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:31.531 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:31.531 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:31.531 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:31.531 12:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.792 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:31.792 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:32:31.792 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:31.792 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:31.792 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.792 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:31.792 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:31.792 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:31.792 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:31.792 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:31.792 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:31.792 12:15:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:32:31.792 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:32:31.792 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:31.792 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=ae2d78cf6cf88fbbd692665d5b14faec 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.hFD 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key ae2d78cf6cf88fbbd692665d5b14faec 0 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 ae2d78cf6cf88fbbd692665d5b14faec 0 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=ae2d78cf6cf88fbbd692665d5b14faec 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.hFD 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.hFD 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.hFD 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:32.053 12:15:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=fea082ef0892d884833247c3c7bb972fdf044dc87e3d81d2dc15a8c3606a74f5 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.6pC 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key fea082ef0892d884833247c3c7bb972fdf044dc87e3d81d2dc15a8c3606a74f5 3 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 fea082ef0892d884833247c3c7bb972fdf044dc87e3d81d2dc15a8c3606a74f5 3 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=fea082ef0892d884833247c3c7bb972fdf044dc87e3d81d2dc15a8c3606a74f5 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.6pC 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo 
/tmp/spdk.key-sha512.6pC 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.6pC 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=324cc8cb14a3c4a4c36c86260083cc8e8c5bbbc20dc8648d 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.CNp 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 324cc8cb14a3c4a4c36c86260083cc8e8c5bbbc20dc8648d 0 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 324cc8cb14a3c4a4c36c86260083cc8e8c5bbbc20dc8648d 0 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=324cc8cb14a3c4a4c36c86260083cc8e8c5bbbc20dc8648d 00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 
00:32:32.053 12:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:32.053 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.CNp 00:32:32.053 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.CNp 00:32:32.053 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.CNp 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=c6ea3254c26ab61eb4638630e42f23df2a96b6977f9693cb 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.FhR 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key c6ea3254c26ab61eb4638630e42f23df2a96b6977f9693cb 2 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 c6ea3254c26ab61eb4638630e42f23df2a96b6977f9693cb 2 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:32.054 12:15:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=c6ea3254c26ab61eb4638630e42f23df2a96b6977f9693cb 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.FhR 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.FhR 00:32:32.054 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.FhR 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=d7e4b71898a5c6b257ae9d9190c1ff55 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.vwp 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key d7e4b71898a5c6b257ae9d9190c1ff55 1 
00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 d7e4b71898a5c6b257ae9d9190c1ff55 1 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=d7e4b71898a5c6b257ae9d9190c1ff55 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.vwp 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.vwp 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.vwp 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=5759b561c6b72c1b243292a5b3eb0bdb 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:32:32.316 
12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.obb 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 5759b561c6b72c1b243292a5b3eb0bdb 1 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 5759b561c6b72c1b243292a5b3eb0bdb 1 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=5759b561c6b72c1b243292a5b3eb0bdb 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.obb 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.obb 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.obb 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 
00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=71e86efba9d775899d4ad34a9e801fe17d2624e0eb730f5d 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.qHi 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 71e86efba9d775899d4ad34a9e801fe17d2624e0eb730f5d 2 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 71e86efba9d775899d4ad34a9e801fe17d2624e0eb730f5d 2 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=71e86efba9d775899d4ad34a9e801fe17d2624e0eb730f5d 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.qHi 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.qHi 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.qHi 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 
00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=64ae1d5cd4b645b8a845a206def9654d 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:32:32.316 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.ZvO 00:32:32.317 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 64ae1d5cd4b645b8a845a206def9654d 0 00:32:32.317 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 64ae1d5cd4b645b8a845a206def9654d 0 00:32:32.317 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:32.317 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:32.317 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=64ae1d5cd4b645b8a845a206def9654d 00:32:32.317 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:32:32.317 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.ZvO 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.ZvO 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ZvO 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:32:32.578 12:15:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=91305ff4ee8a54cc2aaaf7ef32ccc884986ea608186b4a16d03a41a9fe974f1e 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.Kq8 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 91305ff4ee8a54cc2aaaf7ef32ccc884986ea608186b4a16d03a41a9fe974f1e 3 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 91305ff4ee8a54cc2aaaf7ef32ccc884986ea608186b4a16d03a41a9fe974f1e 3 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=91305ff4ee8a54cc2aaaf7ef32ccc884986ea608186b4a16d03a41a9fe974f1e 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.Kq8 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo 
/tmp/spdk.key-sha512.Kq8 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Kq8 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1518548 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1518548 ']' 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.578 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.840 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.840 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:32:32.840 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:32.840 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.hFD 00:32:32.840 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.840 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.840 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.840 12:15:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.6pC ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.6pC 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.CNp 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.FhR ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.FhR 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.vwp 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.obb ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.obb 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.qHi 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ZvO ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ZvO 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.841 12:15:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Kq8 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:32.841 12:15:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # local block nvme 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # modprobe nvmet 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:32.841 12:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:36.149 Waiting for block devices as requested 00:32:36.149 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:36.409 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:36.409 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:36.409 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:36.669 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:36.669 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:36.669 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:36.929 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:36.929 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:36.929 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:37.190 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:37.190 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:37.190 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:37.190 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:37.451 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:37.451 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:37.451 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:38.025 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:32:38.025 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:38.025 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:32:38.025 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:32:38.025 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:32:38.025 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:32:38.025 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:32:38.025 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:32:38.025 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:38.286 No valid GPT data, bailing 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # echo 1 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@471 -- # echo 1 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@473 
-- # echo 10.0.0.1 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # echo tcp 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # echo 4420 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # echo ipv4 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:32:38.286 00:32:38.286 Discovery Log Number of Records 2, Generation counter 2 00:32:38.286 =====Discovery Log Entry 0====== 00:32:38.286 trtype: tcp 00:32:38.286 adrfam: ipv4 00:32:38.286 subtype: current discovery subsystem 00:32:38.286 treq: not specified, sq flow control disable supported 00:32:38.286 portid: 1 00:32:38.286 trsvcid: 4420 00:32:38.286 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:38.286 traddr: 10.0.0.1 00:32:38.286 eflags: none 00:32:38.286 sectype: none 00:32:38.286 =====Discovery Log Entry 1====== 00:32:38.286 trtype: tcp 00:32:38.286 adrfam: ipv4 00:32:38.286 subtype: nvme subsystem 00:32:38.286 treq: not specified, sq flow control disable supported 00:32:38.286 portid: 1 00:32:38.286 trsvcid: 4420 00:32:38.286 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:38.286 traddr: 10.0.0.1 00:32:38.286 eflags: none 00:32:38.286 sectype: none 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==:
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==:
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==:
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]]
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==:
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:32:38.286 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:38.287 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.287 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.287 nvme0n1
00:32:38.287 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.287 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:38.287 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:38.287 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.287 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN:
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=:
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:38.547 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN:
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]]
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=:
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.548 nvme0n1
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.548 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==:
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==:
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==:
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]]
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==:
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.810 nvme0n1
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV:
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi:
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV:
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]]
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi:
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:38.810 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:39.072 12:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.072 nvme0n1
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==:
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0:
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==:
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]]
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0:
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:39.072 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.333 nvme0n1
00:32:39.333 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:39.333 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:39.333 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:39.333 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:39.333 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.333 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:39.333 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:39.333 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:39.333 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:39.333 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.333 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:39.333 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=:
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=:
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:39.334 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.595 nvme0n1
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 --
# key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]] 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.595 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.856 nvme0n1 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:39.857 12:16:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:39.857 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.118 nvme0n1 00:32:40.118 12:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # digest=sha256 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.118 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.378 nvme0n1 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # digest=sha256 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]] 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:40.378 12:16:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.378 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.637 nvme0n1 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:40.637 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.638 12:16:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.638 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.898 nvme0n1 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha256 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]] 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:40.898 12:16:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:40.898 12:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.157 nvme0n1 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:41.157 12:16:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:41.157 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:41.158 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:41.416 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # 
ip=10.0.0.1 00:32:41.416 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:41.416 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:41.416 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:41.416 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.416 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.416 nvme0n1 00:32:41.416 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.416 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.416 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.416 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.416 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.675 
12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # 
ip=10.0.0.1 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:41.675 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:41.676 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.676 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.935 nvme0n1 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.935 
12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]] 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:41.935 
12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:41.935 12:16:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:41.935 12:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.195 nvme0n1 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.195 12:16:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:42.195 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:42.455 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:42.455 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.1 ]] 00:32:42.455 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:42.455 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:42.455 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.455 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.455 nvme0n1 00:32:42.455 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.455 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.455 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.455 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.455 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.455 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.714 12:16:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]] 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:42.714 12:16:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:42.714 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:42.715 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:42.715 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:42.715 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:42.715 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:42.715 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:42.715 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:42.715 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:42.715 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:32:42.715 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:42.715 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:42.715 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:42.715 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:42.715 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.715 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.973 nvme0n1 00:32:42.973 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.973 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.973 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.973 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.973 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.973 12:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 
00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # 
eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.232 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.492 nvme0n1 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha256 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:43.492 12:16:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:43.492 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:43.751 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:43.751 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:43.751 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.010 nvme0n1 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.010 12:16:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]] 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.010 12:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:44.010 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:44.011 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:44.011 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:44.011 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:44.011 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.011 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.580 nvme0n1 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.580 12:16:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/cvl_0_0/ifalias' 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:44.580 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.149 nvme0n1 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.149 12:16:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]] 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 
00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.149 12:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.719 nvme0n1 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:45.719 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:45.720 12:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.290 nvme0n1 00:32:46.290 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.290 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.290 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.290 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.290 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:32:46.550 12:16:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:46.550 12:16:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:46.550 12:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.118 nvme0n1 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]] 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:47.118 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:47.119 12:16:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:47.119 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.058 nvme0n1 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:48.058 12:16:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.058 12:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.628 nvme0n1 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.628 12:16:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:48.628 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]] 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 
00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.629 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.889 nvme0n1 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:48.889 12:16:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.889 nvme0n1 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:48.889 12:16:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:48.889 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:49.150 12:16:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.150 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.151 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:49.151 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:49.151 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.151 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:49.151 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.151 12:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.151 nvme0n1 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.151 
12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.151 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 
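The `key=DHHC-1:...` / `ckey=DHHC-1:...` values above use the NVMe-oF DH-HMAC-CHAP secret representation: a colon-separated string whose second field identifies the hash used for the key and whose third field is base64. A minimal sketch splitting one of the keys from this log into its fields, assuming the standard layout in which the base64 payload is the raw secret followed by a 4-byte CRC-32 (the key value is taken verbatim from the log; the field meanings are the generic `DHHC-1` convention, not something this test asserts):

```shell
# Split a DH-HMAC-CHAP secret of the form DHHC-1:<hash>:<base64(secret||crc32)>:
# into its fields. Key copied from the log above; CRC layout is the standard
# NVMe-oF representation (assumption, not verified by this test).
key="DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV:"
hash_id=$(printf '%s' "$key" | cut -d: -f2)        # 01
secret_b64=$(printf '%s' "$key" | cut -d: -f3)     # base64 payload
# Decoded payload: 32-byte secret plus 4-byte CRC-32 trailer -> 36 bytes.
decoded_len=$(printf '%s' "$secret_b64" | base64 -d | wc -c)
echo "$hash_id $decoded_len"
```

A 48-character base64 field always decodes to 36 bytes, which is how a 32-byte secret plus its CRC trailer shows up here.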
00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]] 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 
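Each cycle in this log follows the same shape driven by `host/auth.sh`: configure the allowed digest and DH group via `bdev_nvme_set_options`, attach the controller with the keyid under test, confirm `nvme0` exists via `bdev_nvme_get_controllers`, then detach. A self-contained sketch of that loop, with `rpc_cmd` stubbed to an `echo` so the control flow is visible without a running SPDK target (the real harness sends these RPCs to the SPDK application socket; the stub is purely illustrative):

```shell
# Stub: the real rpc_cmd forwards to SPDK's JSON-RPC socket.
rpc_cmd() { echo "rpc: $*"; }

# Sketch of one connect_authenticate cycle as seen in the log above.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
    rpc_cmd bdev_nvme_detach_controller nvme0
}

connect_authenticate sha384 ffdhe2048 3
```

The outer loops in the log iterate this cycle over every keyid for each DH group (`ffdhe2048`, then `ffdhe3072`), which is why the same RPC sequence repeats below with only `keyid` and `dhgroup` changing.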
00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:49.411 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.412 nvme0n1 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey= 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.412 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.673 nvme0n1 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.673 12:16:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]] 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.673 12:16:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
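The repeated `get_ip_address initiator0` trace above resolves a logical interface name to a kernel netdev (`initiator0` -> `cvl_0_0`) and then reads the IP that the setup scripts stored in that device's `ifalias` sysfs attribute. A sketch of that lookup with the `/sys/class/net` tree replaced by a temp directory so it runs anywhere (the real function reads the live sysfs; the mock directory and the fixed `cvl_0_0` mapping are assumptions for illustration):

```shell
# Mock sysfs tree standing in for /sys/class/net in this sketch.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/cvl_0_0"
echo 10.0.0.1 > "$sysfs/cvl_0_0/ifalias"

# Sketch of get_ip_address: read the IP recorded in the netdev's ifalias.
get_ip_address() {
    local dev=$1 ip
    ip=$(cat "$sysfs/$dev/ifalias")
    [ -n "$ip" ] && echo "$ip"
}

get_ip_address cvl_0_0
```

Storing the address in `ifalias` lets the test scripts recover it without parsing `ip addr` output, which is why every attach in this log is preceded by the same `cat .../ifalias` trace yielding `10.0.0.1`.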
00:32:49.673 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.933 nvme0n1 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- 
# key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.933 
12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:49.933 12:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.194 nvme0n1 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:50.194 12:16:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.194 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.454 nvme0n1 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:50.454 12:16:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]] 00:32:50.454 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.455 12:16:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.455 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.715 nvme0n1 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:50.715 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.975 nvme0n1 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:50.975 12:16:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:50.975 12:16:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]] 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:50.975 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:51.235 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:51.235 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:51.235 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:51.235 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:51.235 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:51.235 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:51.235 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.235 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.235 nvme0n1 00:32:51.235 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.235 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.235 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.235 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.235 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # digest=sha384 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.495 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.756 nvme0n1 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:51.756 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.016 nvme0n1 00:32:52.016 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.016 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.016 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.016 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.016 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.016 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.016 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.016 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.016 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.016 12:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.016 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]] 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.017 12:16:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.1 ]] 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.017 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.277 nvme0n1 00:32:52.277 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.277 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.277 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.277 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.277 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.277 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 
ffdhe4096 4 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:52.537 12:16:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.537 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.798 nvme0n1 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]] 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:52.798 
12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:52.798 12:16:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.370 nvme0n1 00:32:53.370 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.370 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.370 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.370 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.370 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.371 12:16:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:53.371 12:16:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.371 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.632 nvme0n1 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:53.632 12:16:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:53.632 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:53.893 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:53.893 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:53.893 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:53.893 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:32:53.893 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:53.893 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:53.893 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:53.893 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:53.893 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:53.893 12:16:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.153 nvme0n1 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]] 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe6144 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:54.153 12:16:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.153 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.724 nvme0n1 00:32:54.724 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.724 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.724 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.724 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.724 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.724 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.724 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.725 12:16:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:54.725 12:16:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:54.725 12:16:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.725 12:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.297 nvme0n1 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for 
dhgroup in "${dhgroups[@]}" 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]] 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:55.297 12:16:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.297 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.868 nvme0n1 00:32:55.868 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.868 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.868 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.868 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.868 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.868 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.868 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.868 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.869 12:16:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:55.869 12:16:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo 
cvl_0_0 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.869 12:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.440 nvme0n1 00:32:56.440 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.440 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.440 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.440 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.440 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:56.725 
12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.725 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 
-- # echo cvl_0_0 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.726 12:16:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.300 nvme0n1 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]] 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe8192 3 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:57.300 12:16:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.300 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.872 nvme0n1 00:32:57.872 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.872 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.872 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.872 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.872 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.134 
12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:58.134 12:16:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 
00:32:58.134 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:58.134 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:58.134 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:58.134 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:58.134 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:58.134 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:58.134 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.134 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.707 nvme0n1 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.707 12:16:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]] 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:58.707 12:16:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.707 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.969 nvme0n1 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ 
-z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev 
initiator0 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:58.969 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:58.970 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:58.970 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:58.970 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.970 12:16:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.230 nvme0n1 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.230 12:16:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:32:59.230 
12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.230 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.490 nvme0n1 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:32:59.490 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]] 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.491 nvme0n1 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.491 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.751 12:16:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:59.751 12:16:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:59.751 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:59.752 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:59.752 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:59.752 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:59.752 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:59.752 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.752 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.752 nvme0n1 00:32:59.752 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.752 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.752 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.752 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.752 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:32:59.752 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN:
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=:
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN:
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]]
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=:
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.012 12:16:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.012 nvme0n1
00:33:00.012 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.012 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:00.012 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:00.012 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.012 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.012 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.279 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==:
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==:
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==:
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]]
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==:
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.280 nvme0n1
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.280 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.540 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV:
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi:
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV:
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]]
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi:
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.541 nvme0n1
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.541 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==:
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0:
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==:
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]]
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0:
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.801 nvme0n1
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:00.801 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=:
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=:
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:33:01.061 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:01.062 12:16:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.062 nvme0n1
00:33:01.062 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:01.062 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:01.062 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:01.062 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:01.062 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.062 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN:
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=:
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN:
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]]
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=:
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:33:01.322 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:33:01.323 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:33:01.323 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:33:01.323 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:33:01.323 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:33:01.323 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:33:01.323 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:01.323 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.583 nvme0n1
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- 
# key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.583 
12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.583 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.843 nvme0n1 00:33:01.843 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.843 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.843 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.843 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.843 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.843 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.843 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.843 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.843 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.843 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.843 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.843 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.843 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:01.844 12:16:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.844 12:16:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.104 nvme0n1 00:33:02.104 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.104 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.104 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.104 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.104 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.105 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.105 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.105 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.105 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.105 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.365 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.365 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.365 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:02.365 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.365 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.365 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:02.365 12:16:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:02.365 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:33:02.365 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:33:02.365 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.365 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:02.365 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]] 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.366 12:16:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.366 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.627 nvme0n1 00:33:02.627 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.627 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.627 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.627 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.627 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.627 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.627 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.627 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.627 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.627 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.627 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.627 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.627 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:33:02.628 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.890 nvme0n1 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:02.890 12:16:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:02.890 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]] 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.891 12:16:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.462 nvme0n1 00:33:03.462 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.462 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.462 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.462 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.462 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.462 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # digest=sha512 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.463 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.723 nvme0n1 00:33:03.723 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.723 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.723 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.723 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.723 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:03.984 12:16:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.245 nvme0n1 00:33:04.245 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.245 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.245 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.245 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.245 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.245 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.245 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.245 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.245 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.245 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]] 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.506 12:16:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.1 ]] 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.506 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.767 nvme0n1 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 
ffdhe6144 4 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:04.767 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:04.768 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.768 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:04.768 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:04.768 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:04.768 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.768 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:04.768 12:16:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:04.768 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.768 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.058 12:16:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.383 nvme0n1 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:33:05.383 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YWUyZDc4Y2Y2Y2Y4OGZiYmQ2OTI2NjVkNWIxNGZhZWPB8GgN: 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: ]] 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZmVhMDgyZWYwODkyZDg4NDgzMzI0N2MzYzdiYjk3MmZkZjA0NGRjODdlM2Q4MWQyZGMxNWE4YzM2MDZhNzRmNV9stFM=: 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:05.384 
12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.384 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.963 nvme0n1 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.963 12:16:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.963 12:16:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.963 12:16:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.963 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.224 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.795 nvme0n1 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.795 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:06.796 12:16:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:06.796 12:16:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.366 nvme0n1 00:33:07.366 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.366 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.366 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.366 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.366 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.366 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.366 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.366 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.366 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.366 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NzFlODZlZmJhOWQ3NzU4OTlkNGFkMzRhOWU4MDFmZTE3ZDI2MjRlMGViNzMwZjVk8LVIUw==: 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: ]] 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NjRhZTFkNWNkNGI2NDViOGE4NDVhMjA2ZGVmOTY1NGRj2qk0: 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe8192 00:33:07.626 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:07.627 12:16:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:07.627 12:16:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.196 nvme0n1 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.196 12:16:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OTEzMDVmZjRlZThhNTRjYzJhYWFmN2VmMzJjY2M4ODQ5ODZlYTYwODE4NmI0YTE2ZDAzYTQxYTlmZTk3NGYxZV/eU2c=: 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:08.196 12:16:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:08.196 12:16:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:08.196 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:08.197 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.197 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.767 nvme0n1 00:33:08.767 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:08.767 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.767 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.767 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:08.767 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.767 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe2048 1 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.028 12:16:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:09.028 12:16:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.028 request: 00:33:09.028 { 00:33:09.028 "name": "nvme0", 00:33:09.028 "trtype": "tcp", 00:33:09.028 "traddr": "10.0.0.1", 00:33:09.028 "adrfam": "ipv4", 00:33:09.028 "trsvcid": "4420", 00:33:09.028 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:09.028 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:09.028 "prchk_reftag": false, 00:33:09.028 "prchk_guard": false, 00:33:09.028 "hdgst": false, 00:33:09.028 "ddgst": false, 00:33:09.028 "allow_unrecognized_csi": false, 00:33:09.028 "method": "bdev_nvme_attach_controller", 00:33:09.028 "req_id": 1 00:33:09.028 } 00:33:09.028 Got JSON-RPC error response 00:33:09.028 response: 00:33:09.028 { 00:33:09.028 "code": -5, 00:33:09.028 "message": "Input/output error" 00:33:09.028 } 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:09.028 12:16:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 
00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.028 12:16:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.028 request: 00:33:09.028 { 00:33:09.028 "name": "nvme0", 00:33:09.028 "trtype": "tcp", 00:33:09.028 "traddr": "10.0.0.1", 00:33:09.028 "adrfam": "ipv4", 00:33:09.028 "trsvcid": "4420", 00:33:09.028 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:09.028 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:09.028 "prchk_reftag": false, 00:33:09.029 "prchk_guard": false, 00:33:09.029 "hdgst": false, 00:33:09.029 "ddgst": false, 00:33:09.029 "dhchap_key": "key2", 00:33:09.029 "allow_unrecognized_csi": false, 00:33:09.029 "method": "bdev_nvme_attach_controller", 00:33:09.029 "req_id": 1 00:33:09.029 } 00:33:09.029 Got JSON-RPC error response 00:33:09.029 response: 00:33:09.029 { 00:33:09.029 "code": -5, 00:33:09.029 "message": "Input/output error" 00:33:09.029 } 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:09.029 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.289 request: 00:33:09.289 { 00:33:09.289 "name": "nvme0", 00:33:09.289 "trtype": "tcp", 00:33:09.289 "traddr": "10.0.0.1", 00:33:09.289 "adrfam": "ipv4", 00:33:09.289 "trsvcid": "4420", 00:33:09.289 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:09.289 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:09.289 "prchk_reftag": false, 00:33:09.289 "prchk_guard": false, 00:33:09.289 "hdgst": false, 00:33:09.289 "ddgst": false, 00:33:09.289 "dhchap_key": "key1", 00:33:09.289 "dhchap_ctrlr_key": "ckey2", 00:33:09.289 "allow_unrecognized_csi": false, 00:33:09.289 "method": 
"bdev_nvme_attach_controller", 00:33:09.289 "req_id": 1 00:33:09.289 } 00:33:09.289 Got JSON-RPC error response 00:33:09.289 response: 00:33:09.289 { 00:33:09.289 "code": -5, 00:33:09.289 "message": "Input/output error" 00:33:09.289 } 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/cvl_0_0/ifalias' 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.289 nvme0n1 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 
00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.289 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 
-- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.549 request: 00:33:09.549 { 00:33:09.549 "name": "nvme0", 00:33:09.549 "dhchap_key": "key1", 00:33:09.549 "dhchap_ctrlr_key": "ckey2", 00:33:09.549 "method": "bdev_nvme_set_keys", 00:33:09.549 "req_id": 1 00:33:09.549 } 00:33:09.549 Got JSON-RPC error response 00:33:09.549 response: 00:33:09.549 { 00:33:09.549 "code": -13, 00:33:09.549 "message": "Permission denied" 00:33:09.549 } 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.549 
12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:33:09.549 12:16:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:33:10.488 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.488 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:33:10.488 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.488 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.488 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI0Y2M4Y2IxNGEzYzRhNGMzNmM4NjI2MDA4M2NjOGU4YzViYmJjMjBkYzg2NDhkq97oeA==: 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: ]] 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:YzZlYTMyNTRjMjZhYjYxZWI0NjM4NjMwZTQyZjIzZGYyYTk2YjY5NzdmOTY5M2Nia+6sqQ==: 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' 
cat /sys/class/net/cvl_0_0/ifalias' 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:10.747 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.748 nvme0n1 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 
00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZDdlNGI3MTg5OGE1YzZiMjU3YWU5ZDkxOTBjMWZmNTV7iFPV: 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: ]] 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTc1OWI1NjFjNmI3MmMxYjI0MzI5MmE1YjNlYjBiZGKS8qyi: 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.748 request: 00:33:10.748 { 00:33:10.748 "name": "nvme0", 00:33:10.748 "dhchap_key": "key2", 00:33:10.748 "dhchap_ctrlr_key": "ckey1", 00:33:10.748 "method": 
"bdev_nvme_set_keys", 00:33:10.748 "req_id": 1 00:33:10.748 } 00:33:10.748 Got JSON-RPC error response 00:33:10.748 response: 00:33:10.748 { 00:33:10.748 "code": -13, 00:33:10.748 "message": "Permission denied" 00:33:10.748 } 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.748 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.007 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:33:11.007 12:16:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.948 
12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@99 -- # sync 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # set +e 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:11.948 rmmod nvme_tcp 00:33:11.948 rmmod nvme_fabrics 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # set -e 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # return 0 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # '[' -n 1518548 ']' 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@337 -- # killprocess 1518548 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1518548 ']' 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1518548 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:11.948 12:16:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1518548 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1518548' 00:33:11.948 killing process with pid 1518548 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1518548 00:33:11.948 12:16:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1518548 00:33:12.210 12:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:12.210 12:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # nvmf_fini 00:33:12.210 12:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@254 -- # local dev 00:33:12.210 12:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:12.210 12:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:12.210 12:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:12.210 12:16:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # return 0 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e 
/sys/class/net/cvl_0_0/address ]] 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # _dev=0 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # dev_map=() 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@274 -- # iptr 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-save 00:33:14.124 12:16:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-restore 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:14.124 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # echo 0 00:33:14.385 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:14.385 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:14.385 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:14.385 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:14.385 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:33:14.385 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:33:14.385 12:16:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:17.689 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:17.689 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:17.689 0000:80:01.4 (8086 0b00): ioatdma 
-> vfio-pci 00:33:17.689 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:17.689 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:17.689 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:17.689 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:17.689 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:17.689 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:17.689 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:17.689 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:17.689 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:17.689 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:17.950 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:17.950 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:17.950 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:17.950 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:17.950 12:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.hFD /tmp/spdk.key-null.CNp /tmp/spdk.key-sha256.vwp /tmp/spdk.key-sha384.qHi /tmp/spdk.key-sha512.Kq8 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:17.950 12:16:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:22.155 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 
00:33:22.155 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:33:22.155 00:33:22.155 real 0m58.487s 00:33:22.155 user 0m52.863s 00:33:22.155 sys 0m16.149s 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.155 ************************************ 00:33:22.155 END TEST nvmf_auth_host 00:33:22.155 ************************************ 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:22.155 ************************************ 00:33:22.155 START TEST nvmf_digest 00:33:22.155 ************************************ 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:22.155 * Looking for test storage... 
00:33:22.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:33:22.155 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:22.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.156 --rc genhtml_branch_coverage=1 00:33:22.156 --rc genhtml_function_coverage=1 00:33:22.156 --rc genhtml_legend=1 00:33:22.156 --rc geninfo_all_blocks=1 00:33:22.156 --rc geninfo_unexecuted_blocks=1 00:33:22.156 00:33:22.156 ' 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:22.156 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:33:22.156 --rc genhtml_branch_coverage=1 00:33:22.156 --rc genhtml_function_coverage=1 00:33:22.156 --rc genhtml_legend=1 00:33:22.156 --rc geninfo_all_blocks=1 00:33:22.156 --rc geninfo_unexecuted_blocks=1 00:33:22.156 00:33:22.156 ' 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:22.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.156 --rc genhtml_branch_coverage=1 00:33:22.156 --rc genhtml_function_coverage=1 00:33:22.156 --rc genhtml_legend=1 00:33:22.156 --rc geninfo_all_blocks=1 00:33:22.156 --rc geninfo_unexecuted_blocks=1 00:33:22.156 00:33:22.156 ' 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:22.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:22.156 --rc genhtml_branch_coverage=1 00:33:22.156 --rc genhtml_function_coverage=1 00:33:22.156 --rc genhtml_legend=1 00:33:22.156 --rc geninfo_all_blocks=1 00:33:22.156 --rc geninfo_unexecuted_blocks=1 00:33:22.156 00:33:22.156 ' 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest 
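The trace above walks `scripts/common.sh`'s version check: each version string is split on dots into decimal fields (`ver1[v]`, `ver2[v]`) which are compared index by index. A minimal standalone sketch of that idea (the function name `ver_lt` and its exact shape are mine, not SPDK's verbatim code):

```shell
# Field-by-field comparison of two dotted version strings, in the
# spirit of the traced scripts/common.sh logic. Returns 0 when $1 < $2.
ver_lt() {
    local IFS=.              # split the arguments on '.'
    local -a ver1=($1) ver2=($2)
    local v
    for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        ((a < b)) && return 0
        ((a > b)) && return 1
    done
    return 1                 # equal versions are not "less than"
}

ver_lt 1.2 2.0 && echo "1.2 < 2.0"
```

Numeric (not lexicographic) comparison per field is the point: `1.2.3 < 1.10` holds even though the string "10" sorts before "2".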
-- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # 
export PATH 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@50 -- # : 0 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:22.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 
00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:22.156 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # remove_target_ns 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:22.157 12:16:46 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # xtrace_disable 00:33:22.157 12:16:46 
nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # pci_devs=() 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # local -a pci_devs 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # net_devs=() 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # e810=() 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # local -ga e810 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # x722=() 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # local -ga x722 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # mlx=() 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # local -ga mlx 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:30.311 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:33:30.312 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ 
ice == unknown ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:33:30.312 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.312 12:16:53 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:30.312 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:33:30.313 Found net devices under 0000:4b:00.0: cvl_0_0 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:33:30.313 Found net devices under 0000:4b:00.1: cvl_0_1 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # is_hw=yes 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
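The loop traced above resolves each matched NIC PCI address to its kernel net device(s) by globbing the sysfs node `/sys/bus/pci/devices/$pci/net/` (yielding lines like "Found net devices under 0000:4b:00.0: cvl_0_0"). A simplified, illustrative sketch; SPDK's real `gather_supported_nvmf_pci_devs` additionally filters by vendor/device ID through its `pci_bus_cache` map:

```shell
# For each PCI address given, print the net devices its driver exposes
# under sysfs. An unmatched glob is left literal by bash, so the -e
# test skips addresses with no bound net device.
list_pci_net_devs() {
    local pci
    for pci in "$@"; do
        local devs=("/sys/bus/pci/devices/$pci/net/"*)
        [[ -e ${devs[0]} ]] || continue
        echo "Found net devices under $pci: ${devs[*]##*/}"
    done
}
```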
nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@247 -- # create_target_ns 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:30.313 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@28 -- # local -g _dev 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:30.314 12:16:53 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # ip link set 
cvl_0_1 netns nvmf_ns_spdk 00:33:30.314 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:33:30.314 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:30.314 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772161 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:30.315 10.0.0.1 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local 
val=167772162 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:30.315 10.0.0.2 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:30.315 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link 
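The `val_to_ip` steps traced above turn the integer IP pool values into dotted-quad addresses (167772161 is 0x0A000001, i.e. 10.0.0.1) via `printf '%u.%u.%u.%u\n'`. The byte-shift extraction below is my reconstruction of that conversion, not SPDK's exact code:

```shell
# Convert a 32-bit integer to dotted-quad notation, as the traced
# val_to_ip step does. Each byte is isolated by shifting and masking.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(((val >> 24) & 0xff)) \
        $(((val >> 16) & 0xff)) \
        $(((val >> 8) & 0xff)) \
        $((val & 0xff))
}

val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772162   # -> 10.0.0.2
```

Keeping the pool as an integer lets `setup_interfaces` hand out consecutive addresses with plain arithmetic (`ip_pool += 2` per initiator/target pair), as seen in the trace.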
set cvl_0_1 up' 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # 
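Condensed, the netns plumbing traced above is: create the `nvmf_ns_spdk` namespace, move the target-side port into it, assign the 10.0.0.1/10.0.0.2 pair, bring both links up, and open TCP/4420 on the initiator side. The sketch below replays those commands; they need root and real NICs, so a `DRYRUN=echo` switch (my addition, not SPDK's) makes it print instead of execute:

```shell
# Replay of the initiator/target pair setup from the trace. Device and
# namespace names match the log; the function itself is illustrative.
setup_pair() {
    local init_dev=$1 tgt_dev=$2 ns=$3
    $DRYRUN ip netns add "$ns"
    $DRYRUN ip netns exec "$ns" ip link set lo up
    $DRYRUN ip link set "$tgt_dev" netns "$ns"          # target NIC into the ns
    $DRYRUN ip addr add 10.0.0.1/24 dev "$init_dev"
    $DRYRUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_dev"
    $DRYRUN ip link set "$init_dev" up
    $DRYRUN ip netns exec "$ns" ip link set "$tgt_dev" up
    $DRYRUN iptables -I INPUT 1 -i "$init_dev" -p tcp --dport 4420 -j ACCEPT
}

DRYRUN=echo setup_pair cvl_0_0 cvl_0_1 nvmf_ns_spdk
```

The trace then cross-pings both addresses (one ping run inside the namespace, one outside) before `nvmftestinit` proceeds, confirming the pair is actually routable.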
[[ -n '' ]] 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:30.316 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:30.317 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:30.317 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:30.317 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:30.317 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:30.317 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:30.317 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:30.317 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:30.317 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:30.317 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:30.317 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:30.317 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.317 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.317 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:30.317 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:30.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:30.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.606 ms 00:33:30.317 00:33:30.317 --- 10.0.0.1 ping statistics --- 00:33:30.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.317 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:33:30.320 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:30.320 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:30.320 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:30.320 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.320 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:30.321 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:30.321 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:33:30.321 00:33:30.321 --- 10.0.0.2 ping statistics --- 00:33:30.321 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:30.321 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:33:30.321 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # return 0 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:30.322 12:16:54 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:30.322 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 
00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # return 1 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev= 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@160 -- # return 0 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:30.323 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target1 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # return 1 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev= 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@160 -- # return 0 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:33:30.324 12:16:54 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:33:30.324 ' 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:30.324 ************************************ 00:33:30.324 START TEST nvmf_digest_clean 00:33:30.324 ************************************ 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 
00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:30.324 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@328 -- # nvmfpid=1535423 00:33:30.325 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@329 -- # waitforlisten 1535423 00:33:30.325 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:30.325 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1535423 ']' 00:33:30.325 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:30.325 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:30.325 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:30.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:30.325 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:30.325 12:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:30.325 [2024-12-05 12:16:54.558131] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:30.325 [2024-12-05 12:16:54.558190] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:30.325 [2024-12-05 12:16:54.657408] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.325 [2024-12-05 12:16:54.709635] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:30.325 [2024-12-05 12:16:54.709683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:30.325 [2024-12-05 12:16:54.709692] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:30.325 [2024-12-05 12:16:54.709699] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:30.325 [2024-12-05 12:16:54.709706] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:30.325 [2024-12-05 12:16:54.710522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:30.589 null0 00:33:30.589 [2024-12-05 12:16:55.512190] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:30.589 [2024-12-05 12:16:55.536486] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1535622 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1535622 /var/tmp/bperf.sock 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1535622 ']' 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:30.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:30.589 12:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:30.589 [2024-12-05 12:16:55.597320] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:30.589 [2024-12-05 12:16:55.597385] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1535622 ] 00:33:30.849 [2024-12-05 12:16:55.689831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.849 [2024-12-05 12:16:55.742410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.418 12:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:31.418 12:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:33:31.418 12:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:31.418 12:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:31.418 12:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:31.679 12:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:31.679 12:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:31.939 nvme0n1 00:33:32.200 12:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:32.200 12:16:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:32.200 Running I/O for 2 seconds... 00:33:34.081 18386.00 IOPS, 71.82 MiB/s [2024-12-05T11:16:59.130Z] 19071.00 IOPS, 74.50 MiB/s 00:33:34.081 Latency(us) 00:33:34.081 [2024-12-05T11:16:59.130Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.081 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:34.081 nvme0n1 : 2.01 19083.10 74.54 0.00 0.00 6697.66 3153.92 15619.41 00:33:34.081 [2024-12-05T11:16:59.130Z] =================================================================================================================== 00:33:34.081 [2024-12-05T11:16:59.131Z] Total : 19083.10 74.54 0.00 0.00 6697.66 3153.92 15619.41 00:33:34.082 { 00:33:34.082 "results": [ 00:33:34.082 { 00:33:34.082 "job": "nvme0n1", 00:33:34.082 "core_mask": "0x2", 00:33:34.082 "workload": "randread", 00:33:34.082 "status": "finished", 00:33:34.082 "queue_depth": 128, 00:33:34.082 "io_size": 4096, 00:33:34.082 "runtime": 2.005439, 00:33:34.082 "iops": 19083.103500031662, 00:33:34.082 "mibps": 74.54337304699868, 00:33:34.082 "io_failed": 0, 00:33:34.082 "io_timeout": 0, 00:33:34.082 "avg_latency_us": 6697.6583642539845, 00:33:34.082 "min_latency_us": 3153.92, 00:33:34.082 "max_latency_us": 15619.413333333334 00:33:34.082 } 00:33:34.082 ], 00:33:34.082 "core_count": 1 00:33:34.082 } 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 
00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:34.341 | select(.opcode=="crc32c") 00:33:34.341 | "\(.module_name) \(.executed)"' 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1535622 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1535622 ']' 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1535622 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1535622 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- 
# '[' reactor_1 = sudo ']' 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1535622' 00:33:34.341 killing process with pid 1535622 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1535622 00:33:34.341 Received shutdown signal, test time was about 2.000000 seconds 00:33:34.341 00:33:34.341 Latency(us) 00:33:34.341 [2024-12-05T11:16:59.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:34.341 [2024-12-05T11:16:59.390Z] =================================================================================================================== 00:33:34.341 [2024-12-05T11:16:59.390Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:34.341 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1535622 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1536448 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1536448 
/var/tmp/bperf.sock 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1536448 ']' 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:34.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:34.601 12:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:34.601 [2024-12-05 12:16:59.541303] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:34.601 [2024-12-05 12:16:59.541360] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1536448 ] 00:33:34.601 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:34.601 Zero copy mechanism will not be used. 
00:33:34.601 [2024-12-05 12:16:59.623780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:34.861 [2024-12-05 12:16:59.653249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:35.431 12:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:35.431 12:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:33:35.431 12:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:35.431 12:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:35.431 12:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:35.693 12:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:35.693 12:17:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:35.954 nvme0n1 00:33:36.214 12:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:36.214 12:17:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:36.215 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:36.215 Zero copy mechanism will not be used. 00:33:36.215 Running I/O for 2 seconds... 
00:33:38.097 2944.00 IOPS, 368.00 MiB/s [2024-12-05T11:17:03.146Z] 2987.00 IOPS, 373.38 MiB/s 00:33:38.097 Latency(us) 00:33:38.097 [2024-12-05T11:17:03.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.097 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:38.097 nvme0n1 : 2.05 2927.05 365.88 0.00 0.00 5361.78 515.41 48278.19 00:33:38.097 [2024-12-05T11:17:03.146Z] =================================================================================================================== 00:33:38.097 [2024-12-05T11:17:03.146Z] Total : 2927.05 365.88 0.00 0.00 5361.78 515.41 48278.19 00:33:38.097 { 00:33:38.097 "results": [ 00:33:38.097 { 00:33:38.097 "job": "nvme0n1", 00:33:38.097 "core_mask": "0x2", 00:33:38.097 "workload": "randread", 00:33:38.097 "status": "finished", 00:33:38.097 "queue_depth": 16, 00:33:38.097 "io_size": 131072, 00:33:38.097 "runtime": 2.046431, 00:33:38.097 "iops": 2927.0471371866433, 00:33:38.097 "mibps": 365.8808921483304, 00:33:38.097 "io_failed": 0, 00:33:38.097 "io_timeout": 0, 00:33:38.097 "avg_latency_us": 5361.782811352254, 00:33:38.097 "min_latency_us": 515.4133333333333, 00:33:38.097 "max_latency_us": 48278.18666666667 00:33:38.097 } 00:33:38.097 ], 00:33:38.097 "core_count": 1 00:33:38.097 } 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:38.359 | select(.opcode=="crc32c") 00:33:38.359 | "\(.module_name) \(.executed)"' 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1536448 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1536448 ']' 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1536448 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1536448 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1536448' 00:33:38.359 killing process with pid 1536448 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1536448 00:33:38.359 Received shutdown signal, test time was about 2.000000 seconds 
00:33:38.359 00:33:38.359 Latency(us) 00:33:38.359 [2024-12-05T11:17:03.408Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:38.359 [2024-12-05T11:17:03.408Z] =================================================================================================================== 00:33:38.359 [2024-12-05T11:17:03.408Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:38.359 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1536448 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1537137 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1537137 /var/tmp/bperf.sock 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1537137 ']' 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:38.620 12:17:03 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:38.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:38.620 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:38.620 [2024-12-05 12:17:03.567852] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:38.620 [2024-12-05 12:17:03.567911] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537137 ] 00:33:38.620 [2024-12-05 12:17:03.626340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:38.620 [2024-12-05 12:17:03.655887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:38.881 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:38.881 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:33:38.881 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:38.881 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:38.881 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:39.143 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:39.143 12:17:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:39.404 nvme0n1 00:33:39.404 12:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:39.404 12:17:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:39.404 Running I/O for 2 seconds... 
00:33:41.285 30607.00 IOPS, 119.56 MiB/s [2024-12-05T11:17:06.334Z] 30727.00 IOPS, 120.03 MiB/s 00:33:41.285 Latency(us) 00:33:41.285 [2024-12-05T11:17:06.334Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.285 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:41.285 nvme0n1 : 2.01 30739.83 120.08 0.00 0.00 4158.81 2211.84 11250.35 00:33:41.285 [2024-12-05T11:17:06.334Z] =================================================================================================================== 00:33:41.285 [2024-12-05T11:17:06.334Z] Total : 30739.83 120.08 0.00 0.00 4158.81 2211.84 11250.35 00:33:41.545 { 00:33:41.545 "results": [ 00:33:41.545 { 00:33:41.545 "job": "nvme0n1", 00:33:41.545 "core_mask": "0x2", 00:33:41.545 "workload": "randwrite", 00:33:41.545 "status": "finished", 00:33:41.545 "queue_depth": 128, 00:33:41.545 "io_size": 4096, 00:33:41.545 "runtime": 2.005379, 00:33:41.545 "iops": 30739.825240016973, 00:33:41.545 "mibps": 120.0774423438163, 00:33:41.545 "io_failed": 0, 00:33:41.545 "io_timeout": 0, 00:33:41.545 "avg_latency_us": 4158.805001108498, 00:33:41.545 "min_latency_us": 2211.84, 00:33:41.545 "max_latency_us": 11250.346666666666 00:33:41.545 } 00:33:41.545 ], 00:33:41.545 "core_count": 1 00:33:41.545 } 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:41.545 
| select(.opcode=="crc32c") 00:33:41.545 | "\(.module_name) \(.executed)"' 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1537137 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1537137 ']' 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1537137 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:41.545 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1537137 00:33:41.806 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:41.806 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:41.806 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1537137' 00:33:41.806 killing process with pid 1537137 00:33:41.806 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1537137 00:33:41.806 Received shutdown signal, test time was about 2.000000 seconds 00:33:41.806 00:33:41.806 
Latency(us) 00:33:41.806 [2024-12-05T11:17:06.855Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:41.806 [2024-12-05T11:17:06.855Z] =================================================================================================================== 00:33:41.806 [2024-12-05T11:17:06.855Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:41.806 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1537137 00:33:41.806 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:41.806 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:41.806 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:41.806 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:41.806 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:41.806 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:41.807 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:41.807 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1537812 00:33:41.807 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1537812 /var/tmp/bperf.sock 00:33:41.807 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1537812 ']' 00:33:41.807 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:41.807 12:17:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:41.807 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:41.807 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:41.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:41.807 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:41.807 12:17:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:41.807 [2024-12-05 12:17:06.775012] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:41.807 [2024-12-05 12:17:06.775068] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1537812 ] 00:33:41.807 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:41.807 Zero copy mechanism will not be used. 
00:33:42.066 [2024-12-05 12:17:06.856488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:42.066 [2024-12-05 12:17:06.884886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:42.636 12:17:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:42.636 12:17:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:33:42.636 12:17:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:42.636 12:17:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:42.636 12:17:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:42.895 12:17:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:42.895 12:17:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:43.155 nvme0n1 00:33:43.155 12:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:43.155 12:17:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:43.155 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:43.155 Zero copy mechanism will not be used. 00:33:43.155 Running I/O for 2 seconds... 
00:33:45.477 4013.00 IOPS, 501.62 MiB/s [2024-12-05T11:17:10.526Z] 4231.50 IOPS, 528.94 MiB/s 00:33:45.477 Latency(us) 00:33:45.477 [2024-12-05T11:17:10.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.478 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:45.478 nvme0n1 : 2.01 4227.60 528.45 0.00 0.00 3777.75 1454.08 9666.56 00:33:45.478 [2024-12-05T11:17:10.527Z] =================================================================================================================== 00:33:45.478 [2024-12-05T11:17:10.527Z] Total : 4227.60 528.45 0.00 0.00 3777.75 1454.08 9666.56 00:33:45.478 { 00:33:45.478 "results": [ 00:33:45.478 { 00:33:45.478 "job": "nvme0n1", 00:33:45.478 "core_mask": "0x2", 00:33:45.478 "workload": "randwrite", 00:33:45.478 "status": "finished", 00:33:45.478 "queue_depth": 16, 00:33:45.478 "io_size": 131072, 00:33:45.478 "runtime": 2.006575, 00:33:45.478 "iops": 4227.601759216575, 00:33:45.478 "mibps": 528.4502199020719, 00:33:45.478 "io_failed": 0, 00:33:45.478 "io_timeout": 0, 00:33:45.478 "avg_latency_us": 3777.749217650988, 00:33:45.478 "min_latency_us": 1454.08, 00:33:45.478 "max_latency_us": 9666.56 00:33:45.478 } 00:33:45.478 ], 00:33:45.478 "core_count": 1 00:33:45.478 } 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:45.478 | select(.opcode=="crc32c") 00:33:45.478 | "\(.module_name) \(.executed)"' 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1537812 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1537812 ']' 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1537812 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1537812 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1537812' 00:33:45.478 killing process with pid 1537812 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1537812 00:33:45.478 Received shutdown signal, test time was about 2.000000 seconds 
00:33:45.478 00:33:45.478 Latency(us) 00:33:45.478 [2024-12-05T11:17:10.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.478 [2024-12-05T11:17:10.527Z] =================================================================================================================== 00:33:45.478 [2024-12-05T11:17:10.527Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:45.478 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1537812 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1535423 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1535423 ']' 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1535423 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1535423 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1535423' 00:33:45.738 killing process with pid 1535423 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1535423 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1535423 00:33:45.738 00:33:45.738 
real 0m16.258s 00:33:45.738 user 0m32.225s 00:33:45.738 sys 0m3.567s 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:45.738 ************************************ 00:33:45.738 END TEST nvmf_digest_clean 00:33:45.738 ************************************ 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:45.738 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:45.999 ************************************ 00:33:45.999 START TEST nvmf_digest_error 00:33:45.999 ************************************ 00:33:45.999 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:33:45.999 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:33:45.999 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:45.999 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:45.999 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:45.999 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@328 -- # nvmfpid=1538525 00:33:45.999 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@329 -- # waitforlisten 1538525 00:33:45.999 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@327 -- # ip netns exec 
nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:45.999 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1538525 ']' 00:33:45.999 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:45.999 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:45.999 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:45.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:45.999 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:45.999 12:17:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:45.999 [2024-12-05 12:17:10.894405] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:45.999 [2024-12-05 12:17:10.894491] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:45.999 [2024-12-05 12:17:10.988345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:45.999 [2024-12-05 12:17:11.022323] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:45.999 [2024-12-05 12:17:11.022352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:45.999 [2024-12-05 12:17:11.022358] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:45.999 [2024-12-05 12:17:11.022363] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:45.999 [2024-12-05 12:17:11.022368] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:45.999 [2024-12-05 12:17:11.022850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:46.941 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:46.941 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:33:46.941 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:46.942 [2024-12-05 12:17:11.724786] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.942 12:17:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:46.942 null0 00:33:46.942 [2024-12-05 12:17:11.803496] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:46.942 [2024-12-05 12:17:11.827693] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1538870 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1538870 /var/tmp/bperf.sock 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1538870 ']' 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 
00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:46.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:46.942 12:17:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:46.942 [2024-12-05 12:17:11.895068] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:46.942 [2024-12-05 12:17:11.895115] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1538870 ] 00:33:46.942 [2024-12-05 12:17:11.977172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.203 [2024-12-05 12:17:12.007439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.775 12:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:47.775 12:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:33:47.775 12:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:47.775 12:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:48.036 12:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:48.036 12:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.036 12:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:48.036 12:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.036 12:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:48.036 12:17:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:48.297 nvme0n1 00:33:48.297 12:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:48.297 12:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:48.297 12:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:48.297 12:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:48.297 12:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:48.297 12:17:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:48.559 Running I/O for 2 seconds...
00:33:48.559 [2024-12-05 12:17:13.371862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.559 [2024-12-05 12:17:13.371893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.559 [2024-12-05 12:17:13.371902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.559 [2024-12-05 12:17:13.383323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.559 [2024-12-05 12:17:13.383343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.559 [2024-12-05 12:17:13.383350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.559 [2024-12-05 12:17:13.395009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.559 [2024-12-05 12:17:13.395027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.559 [2024-12-05 12:17:13.395034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.559 [2024-12-05 12:17:13.404341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.404360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.404367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.413458] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.413476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.413483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.422685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.422703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.422709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.431076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.431093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.431100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.439695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.439713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:24018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.439719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.448521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.448538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.448545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.457782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.457800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.457806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.466888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.466906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.466912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.475343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.475361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.475367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.483876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.483894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.483900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.493819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.493836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:6417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.493842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.504166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.504184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.504194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.513296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.513313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.513320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.522691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.522708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.522715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.529910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.529926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:152 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.529932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.540003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.540021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.540027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.549529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.549546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:18956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.549552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.558799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.558815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.558821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.567655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.567672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:4161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.567678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.576172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.576189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.576196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.585849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.585869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:19862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.585875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.594472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.594488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.594494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.560 [2024-12-05 12:17:13.603673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.560 [2024-12-05 12:17:13.603689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.560 [2024-12-05 12:17:13.603695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.612744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.823 [2024-12-05 12:17:13.612761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.823 [2024-12-05 12:17:13.612767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.622227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.823 [2024-12-05 12:17:13.622244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.823 [2024-12-05 12:17:13.622251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.631411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.823 [2024-12-05 12:17:13.631429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.823 [2024-12-05 12:17:13.631435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.639987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.823 [2024-12-05 12:17:13.640004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.823 [2024-12-05 12:17:13.640011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.649173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.823 [2024-12-05 12:17:13.649190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:12945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.823 [2024-12-05 12:17:13.649196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.657936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.823 [2024-12-05 12:17:13.657953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.823 [2024-12-05 12:17:13.657959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.667425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.823 [2024-12-05 12:17:13.667442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.823 [2024-12-05 12:17:13.667448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.676241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.823 [2024-12-05 12:17:13.676257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.823 [2024-12-05 12:17:13.676264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.685025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.823 [2024-12-05 12:17:13.685041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.823 [2024-12-05 12:17:13.685048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.692775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.823 [2024-12-05 12:17:13.692791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:11772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.823 [2024-12-05 12:17:13.692797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.702541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.823 [2024-12-05 12:17:13.702558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.823 [2024-12-05 12:17:13.702564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.712748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.823 [2024-12-05 12:17:13.712764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.823 [2024-12-05 12:17:13.712770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.721824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.823 [2024-12-05 12:17:13.721840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:13862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.823 [2024-12-05 12:17:13.721846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.730803] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.823 [2024-12-05 12:17:13.730819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:15409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.823 [2024-12-05 12:17:13.730825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.823 [2024-12-05 12:17:13.738954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.738971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.738980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.824 [2024-12-05 12:17:13.748319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.748336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.748343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.824 [2024-12-05 12:17:13.759226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.759243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.759249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.824 [2024-12-05 12:17:13.768037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.768054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.768060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.824 [2024-12-05 12:17:13.776719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.776735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:3976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.776742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.824 [2024-12-05 12:17:13.785406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.785423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.785429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.824 [2024-12-05 12:17:13.794532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.794549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.794555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.824 [2024-12-05 12:17:13.803076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.803093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.803099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.824 [2024-12-05 12:17:13.812649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.812666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:17738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.812672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.824 [2024-12-05 12:17:13.821060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.821080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:23714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.821086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.824 [2024-12-05 12:17:13.829613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.829629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:8347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.829636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.824 [2024-12-05 12:17:13.837714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.837731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.837737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.824 [2024-12-05 12:17:13.848685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.848702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.848709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.824 [2024-12-05 12:17:13.858144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.858161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.858168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:48.824 [2024-12-05 12:17:13.867673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:48.824 [2024-12-05 12:17:13.867690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.824 [2024-12-05 12:17:13.867696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:13.877593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:13.877609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:13.877615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:13.886358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:13.886375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:13.886381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:13.894049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:13.894065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:13.894075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:13.903665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:13.903682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:13.903688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:13.913176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:13.913193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:13.913200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:13.921526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:13.921543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:13.921549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:13.930605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:13.930622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:6084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:13.930628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:13.939333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:13.939350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:13.939356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:13.947885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:13.947902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:13.947908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:13.956678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:13.956695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:23145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:13.956701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:13.965839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:13.965855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:13.965862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:13.975089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:13.975108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:13.975115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:13.982979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:13.982995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:13.983002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:13.992152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:13.992169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:1910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:13.992175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:14.001283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:14.001300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:14.001306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:14.009295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:14.009312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:14.009318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:14.018970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:14.018987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.085 [2024-12-05 12:17:14.018993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.085 [2024-12-05 12:17:14.027849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.085 [2024-12-05 12:17:14.027866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.086 [2024-12-05 12:17:14.027872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.086 [2024-12-05 12:17:14.036920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.086 [2024-12-05 12:17:14.036936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.086 [2024-12-05 12:17:14.036942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.086 [2024-12-05 12:17:14.045026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.086 [2024-12-05 12:17:14.045042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3790 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.086 [2024-12-05 12:17:14.045049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.086 [2024-12-05 12:17:14.053675] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.086 [2024-12-05 12:17:14.053692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.086 [2024-12-05 12:17:14.053698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.086 [2024-12-05 12:17:14.063419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.086 [2024-12-05 12:17:14.063436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.086 [2024-12-05 12:17:14.063442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.086 [2024-12-05 12:17:14.072235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.086 [2024-12-05 12:17:14.072252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:19817 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.086 [2024-12-05 12:17:14.072258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.086 [2024-12-05 12:17:14.081577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.086 [2024-12-05 12:17:14.081594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.086 [2024-12-05 12:17:14.081600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.086 [2024-12-05 12:17:14.089561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.086 [2024-12-05 12:17:14.089578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.086 [2024-12-05 12:17:14.089584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.086 [2024-12-05 12:17:14.098553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.086 [2024-12-05 12:17:14.098570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:6647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.086 [2024-12-05 12:17:14.098576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.086 [2024-12-05 12:17:14.107413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.086 [2024-12-05 12:17:14.107430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.086 [2024-12-05 12:17:14.107436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.086 [2024-12-05 12:17:14.117217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.086 [2024-12-05 12:17:14.117233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.086 [2024-12-05 12:17:14.117240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.086 [2024-12-05 12:17:14.125152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.086 [2024-12-05 12:17:14.125169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.086 [2024-12-05 12:17:14.125178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.347 [2024-12-05 12:17:14.134858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.347 [2024-12-05 12:17:14.134875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.347 [2024-12-05 
12:17:14.134881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.347 [2024-12-05 12:17:14.145628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.347 [2024-12-05 12:17:14.145645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.347 [2024-12-05 12:17:14.145651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.347 [2024-12-05 12:17:14.154462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.347 [2024-12-05 12:17:14.154478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.347 [2024-12-05 12:17:14.154484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.347 [2024-12-05 12:17:14.162511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.347 [2024-12-05 12:17:14.162527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17865 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.347 [2024-12-05 12:17:14.162534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.347 [2024-12-05 12:17:14.171189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.347 [2024-12-05 12:17:14.171206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16571 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.347 [2024-12-05 12:17:14.171212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.347 [2024-12-05 12:17:14.180248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.347 [2024-12-05 12:17:14.180265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:24116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.347 [2024-12-05 12:17:14.180271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.347 [2024-12-05 12:17:14.189320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.347 [2024-12-05 12:17:14.189337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.347 [2024-12-05 12:17:14.189343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.347 [2024-12-05 12:17:14.199199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.347 [2024-12-05 12:17:14.199215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.347 [2024-12-05 12:17:14.199221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.347 [2024-12-05 12:17:14.208163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.347 [2024-12-05 12:17:14.208183] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12006 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.347 [2024-12-05 12:17:14.208189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.347 [2024-12-05 12:17:14.216282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.347 [2024-12-05 12:17:14.216299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:23098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.216305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.226292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.226309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.226315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.236150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.236168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.236174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.244286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.244303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.244310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.252800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.252817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.252823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.262611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.262628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:8736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.262635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.270810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.270827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.270833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.279866] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.279882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.279889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.288982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.288999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.289006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.298195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.298211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:7562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.298218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.307240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.307257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:24702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.307263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.316254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.316271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.316278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.324113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.324130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18024 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.324136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.336346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.336363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15023 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.336369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.346009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.346026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.346032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 27772.00 IOPS, 108.48 MiB/s [2024-12-05T11:17:14.397Z] [2024-12-05 12:17:14.355710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.355725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.355731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.363485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.363504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:23830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.363511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.373555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.373571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:25425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.373577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.381044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.381061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13983 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.381068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.348 [2024-12-05 12:17:14.390384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.348 [2024-12-05 12:17:14.390400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19414 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.348 [2024-12-05 12:17:14.390407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.608 [2024-12-05 12:17:14.399464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.608 [2024-12-05 12:17:14.399481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:12119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.608 [2024-12-05 12:17:14.399487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.608 [2024-12-05 12:17:14.407923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.608 [2024-12-05 12:17:14.407939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.608 [2024-12-05 12:17:14.407945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.416601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.416618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:118 nsid:1 lba:14871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.416624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.425526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.425543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.425549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.434047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.434064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.434070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.444265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.444282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.444288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.451645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 
12:17:14.451661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:1642 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.451667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.461487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.461504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.461511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.470623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.470640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.470646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.479917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.479933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.479940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.488293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.488309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:23746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.488315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.496700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.496716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:25399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.496723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.506327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.506343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17629 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.506349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.513669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.513686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:11184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.513695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.524196] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.524213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.524219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.532133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.532150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.532156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.542145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.542162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.542168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.609 [2024-12-05 12:17:14.554539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:49.609 [2024-12-05 12:17:14.554556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.609 [2024-12-05 12:17:14.554562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:33:49.609 [2024-12-05 12:17:14.565302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.609 [2024-12-05 12:17:14.565320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:7456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.609 [2024-12-05 12:17:14.565326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.609 [2024-12-05 12:17:14.574840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.609 [2024-12-05 12:17:14.574856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.609 [2024-12-05 12:17:14.574862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.609 [2024-12-05 12:17:14.583051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.609 [2024-12-05 12:17:14.583069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.609 [2024-12-05 12:17:14.583075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.609 [2024-12-05 12:17:14.591804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.609 [2024-12-05 12:17:14.591821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.609 [2024-12-05 12:17:14.591827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.609 [2024-12-05 12:17:14.601240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.609 [2024-12-05 12:17:14.601263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:8301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.609 [2024-12-05 12:17:14.601270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.609 [2024-12-05 12:17:14.609551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.609 [2024-12-05 12:17:14.609567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.609 [2024-12-05 12:17:14.609573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.609 [2024-12-05 12:17:14.618773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.609 [2024-12-05 12:17:14.618790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.609 [2024-12-05 12:17:14.618796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.609 [2024-12-05 12:17:14.627131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.609 [2024-12-05 12:17:14.627148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:21620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.609 [2024-12-05 12:17:14.627154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.609 [2024-12-05 12:17:14.636058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.609 [2024-12-05 12:17:14.636075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:7245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.609 [2024-12-05 12:17:14.636081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.609 [2024-12-05 12:17:14.645338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.609 [2024-12-05 12:17:14.645354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:17823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.609 [2024-12-05 12:17:14.645360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.610 [2024-12-05 12:17:14.654291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.610 [2024-12-05 12:17:14.654308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.610 [2024-12-05 12:17:14.654314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.870 [2024-12-05 12:17:14.663013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.663030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:1632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.663037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.671260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.671277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.671287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.680426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.680443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.680449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.689450] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.689472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:8194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.689478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.697792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.697808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.697815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.707276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.707293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.707299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.715753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.715769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:22579 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.715776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.724465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.724482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.724488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.733382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.733400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:9548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.733406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.741682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.741698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:12135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.741705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.751082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.751102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.751109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.759555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.759572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.759579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.769121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.769138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:12860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.769144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.776637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.776654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.776660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.786620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.786637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.786643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.795485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.795502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.795508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.805232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.805249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.805255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.813164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.813181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.813187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.822986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.823003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.823009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.833336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.833353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.833359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.841612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.841629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.841636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.851495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.851512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.851518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.860762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.860779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.860786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.869495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.869511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:6094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.869517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.878575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.878592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.878599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.888428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.871 [2024-12-05 12:17:14.888445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23607 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.871 [2024-12-05 12:17:14.888451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.871 [2024-12-05 12:17:14.896051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.872 [2024-12-05 12:17:14.896068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.872 [2024-12-05 12:17:14.896074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.872 [2024-12-05 12:17:14.906475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.872 [2024-12-05 12:17:14.906492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.872 [2024-12-05 12:17:14.906501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:49.872 [2024-12-05 12:17:14.915785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:49.872 [2024-12-05 12:17:14.915802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:49.872 [2024-12-05 12:17:14.915808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.133 [2024-12-05 12:17:14.923608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.133 [2024-12-05 12:17:14.923625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:11345 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.133 [2024-12-05 12:17:14.923631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.133 [2024-12-05 12:17:14.932850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:14.932866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:9804 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:14.932873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:14.941750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:14.941766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:14.941773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:14.950801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:14.950818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:11826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:14.950824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:14.960015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:14.960032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:14.960038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:14.969529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:14.969546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:14.969552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:14.977379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:14.977397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:15416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:14.977403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:14.987443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:14.987468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:14.987474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:14.997602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:14.997620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:14.997626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.006730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.006746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.006753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.015134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.015152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.015160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.023326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.023343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.023350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.032768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.032786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.032792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.042491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.042508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.042515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.052475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.052493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.052499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.061141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.061158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.061165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.070057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.070074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:21129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.070081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.079862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.079879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.079885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.087890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.087907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.087913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.097235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.097252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.097258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.106787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.106805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:20973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.106811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.116063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.116080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:10292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.116086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.124311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.124328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3799 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.124334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.133701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.133718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:2612 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.133724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.142156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.142173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.142182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.150352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.150369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.150375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.160006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.134 [2024-12-05 12:17:15.160023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.134 [2024-12-05 12:17:15.160029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.134 [2024-12-05 12:17:15.168466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.135 [2024-12-05 12:17:15.168484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.135 [2024-12-05 12:17:15.168491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.135 [2024-12-05 12:17:15.178108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.135 [2024-12-05 12:17:15.178125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.135 [2024-12-05 12:17:15.178131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.395 [2024-12-05 12:17:15.186953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.395 [2024-12-05 12:17:15.186971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.395 [2024-12-05 12:17:15.186977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.395 [2024-12-05 12:17:15.196379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.395 [2024-12-05 12:17:15.196396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:23530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.395 [2024-12-05 12:17:15.196403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.395 [2024-12-05 12:17:15.205468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.395 [2024-12-05 12:17:15.205485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.395 [2024-12-05 12:17:15.205492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.395 [2024-12-05 12:17:15.213548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.395 [2024-12-05 12:17:15.213565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.395 [2024-12-05 12:17:15.213571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.395 [2024-12-05 12:17:15.223726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.395 [2024-12-05 12:17:15.223743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.395 [2024-12-05 12:17:15.223749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.395 [2024-12-05 12:17:15.231483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.395 [2024-12-05 12:17:15.231500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.395 [2024-12-05 12:17:15.231508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.395 [2024-12-05 12:17:15.241368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.395 [2024-12-05 12:17:15.241386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11719 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.395 [2024-12-05 12:17:15.241392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.395 [2024-12-05 12:17:15.249725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.395 [2024-12-05 12:17:15.249741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:13610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.395 [2024-12-05 12:17:15.249748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.395 [2024-12-05 12:17:15.258649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.395 [2024-12-05 12:17:15.258667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.395 [2024-12-05 12:17:15.258673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.395 [2024-12-05 12:17:15.267830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.395 [2024-12-05 12:17:15.267847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:23289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.395 [2024-12-05 12:17:15.267853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.395 [2024-12-05 12:17:15.279636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.395 [2024-12-05 12:17:15.279653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:22150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.395 [2024-12-05 12:17:15.279659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.395 [2024-12-05 12:17:15.287681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350)
00:33:50.395 [2024-12-05 12:17:15.287698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.395 [2024-12-05 12:17:15.287704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:50.395 [2024-12-05 12:17:15.297231]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:50.395 [2024-12-05 12:17:15.297247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:10746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.395 [2024-12-05 12:17:15.297256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.395 [2024-12-05 12:17:15.305627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:50.395 [2024-12-05 12:17:15.305644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.395 [2024-12-05 12:17:15.305650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.395 [2024-12-05 12:17:15.314864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:50.395 [2024-12-05 12:17:15.314881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.395 [2024-12-05 12:17:15.314887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.395 [2024-12-05 12:17:15.323763] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:50.395 [2024-12-05 12:17:15.323780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:13937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.395 [2024-12-05 12:17:15.323786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:33:50.395 [2024-12-05 12:17:15.331899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:50.395 [2024-12-05 12:17:15.331916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:23679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.395 [2024-12-05 12:17:15.331922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.395 [2024-12-05 12:17:15.341374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:50.395 [2024-12-05 12:17:15.341391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:15458 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.395 [2024-12-05 12:17:15.341397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.395 [2024-12-05 12:17:15.349070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:50.395 [2024-12-05 12:17:15.349087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:14096 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.395 [2024-12-05 12:17:15.349093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.395 28001.00 IOPS, 109.38 MiB/s [2024-12-05T11:17:15.444Z] [2024-12-05 12:17:15.359223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1029350) 00:33:50.395 [2024-12-05 12:17:15.359240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.395 [2024-12-05 
12:17:15.359247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.395 00:33:50.395 Latency(us) 00:33:50.395 [2024-12-05T11:17:15.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:50.395 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:50.395 nvme0n1 : 2.00 28011.81 109.42 0.00 0.00 4563.81 2416.64 17257.81 00:33:50.395 [2024-12-05T11:17:15.444Z] =================================================================================================================== 00:33:50.395 [2024-12-05T11:17:15.444Z] Total : 28011.81 109.42 0.00 0.00 4563.81 2416.64 17257.81 00:33:50.395 { 00:33:50.395 "results": [ 00:33:50.395 { 00:33:50.395 "job": "nvme0n1", 00:33:50.395 "core_mask": "0x2", 00:33:50.395 "workload": "randread", 00:33:50.395 "status": "finished", 00:33:50.395 "queue_depth": 128, 00:33:50.395 "io_size": 4096, 00:33:50.395 "runtime": 2.003798, 00:33:50.395 "iops": 28011.805581201297, 00:33:50.395 "mibps": 109.42111555156757, 00:33:50.395 "io_failed": 0, 00:33:50.395 "io_timeout": 0, 00:33:50.395 "avg_latency_us": 4563.80946517014, 00:33:50.395 "min_latency_us": 2416.64, 00:33:50.395 "max_latency_us": 17257.81333333333 00:33:50.395 } 00:33:50.395 ], 00:33:50.395 "core_count": 1 00:33:50.395 } 00:33:50.395 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:50.395 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:50.395 | .driver_specific 00:33:50.395 | .nvme_error 00:33:50.395 | .status_code 00:33:50.395 | .command_transient_transport_error' 00:33:50.395 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:50.395 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:50.655 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 )) 00:33:50.655 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1538870 00:33:50.655 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1538870 ']' 00:33:50.655 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1538870 00:33:50.655 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:33:50.655 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:50.655 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1538870 00:33:50.655 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:50.655 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:50.655 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1538870' 00:33:50.655 killing process with pid 1538870 00:33:50.655 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1538870 00:33:50.655 Received shutdown signal, test time was about 2.000000 seconds 00:33:50.655 00:33:50.655 Latency(us) 00:33:50.655 [2024-12-05T11:17:15.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:50.655 [2024-12-05T11:17:15.704Z] =================================================================================================================== 00:33:50.655 [2024-12-05T11:17:15.704Z] Total : 
0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:50.655 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1538870 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1539560 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1539560 /var/tmp/bperf.sock 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1539560 ']' 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:50.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:50.916 [2024-12-05 12:17:15.780922] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:33:50.916 [2024-12-05 12:17:15.780978] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539560 ] 00:33:50.916 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:50.916 Zero copy mechanism will not be used. 00:33:50.916 [2024-12-05 12:17:15.839675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:50.916 [2024-12-05 12:17:15.869088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:50.916 12:17:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:51.176 12:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:51.176 12:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.176 12:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:33:51.176 12:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.176 12:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:51.176 12:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:51.750 nvme0n1 00:33:51.750 12:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:51.750 12:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:51.750 12:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:51.750 12:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:51.750 12:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:51.750 12:17:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:51.750 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:51.750 Zero copy mechanism will not be used. 00:33:51.750 Running I/O for 2 seconds... 
00:33:51.750 [2024-12-05 12:17:16.636646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.636679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.750 [2024-12-05 12:17:16.636688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:51.750 [2024-12-05 12:17:16.645944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.645966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.750 [2024-12-05 12:17:16.645973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:51.750 [2024-12-05 12:17:16.654267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.654287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.750 [2024-12-05 12:17:16.654293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:51.750 [2024-12-05 12:17:16.662443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.662467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.750 [2024-12-05 12:17:16.662473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:51.750 [2024-12-05 12:17:16.673706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.673725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.750 [2024-12-05 12:17:16.673731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:51.750 [2024-12-05 12:17:16.681829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.681847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.750 [2024-12-05 12:17:16.681853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:51.750 [2024-12-05 12:17:16.692375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.692394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.750 [2024-12-05 12:17:16.692401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:51.750 [2024-12-05 12:17:16.705544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.705562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.750 [2024-12-05 12:17:16.705568] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:51.750 [2024-12-05 12:17:16.717805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.717823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.750 [2024-12-05 12:17:16.717830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:51.750 [2024-12-05 12:17:16.730726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.730744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.750 [2024-12-05 12:17:16.730751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:51.750 [2024-12-05 12:17:16.743761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.743780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.750 [2024-12-05 12:17:16.743786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:51.750 [2024-12-05 12:17:16.756069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.756087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:51.750 [2024-12-05 12:17:16.756094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:51.750 [2024-12-05 12:17:16.768674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.768693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.750 [2024-12-05 12:17:16.768700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:51.750 [2024-12-05 12:17:16.781249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.781267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.750 [2024-12-05 12:17:16.781274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:51.750 [2024-12-05 12:17:16.793025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:51.750 [2024-12-05 12:17:16.793043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:51.750 [2024-12-05 12:17:16.793050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.013 [2024-12-05 12:17:16.804313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.013 [2024-12-05 12:17:16.804332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.013 [2024-12-05 12:17:16.804339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:52.013 [2024-12-05 12:17:16.815689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.013 [2024-12-05 12:17:16.815707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.013 [2024-12-05 12:17:16.815713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.013 [2024-12-05 12:17:16.823996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.013 [2024-12-05 12:17:16.824015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.013 [2024-12-05 12:17:16.824025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.013 [2024-12-05 12:17:16.830696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.013 [2024-12-05 12:17:16.830714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.013 [2024-12-05 12:17:16.830720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.013 [2024-12-05 12:17:16.835570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.013 [2024-12-05 12:17:16.835588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.013 [2024-12-05 12:17:16.835595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:52.013 [2024-12-05 12:17:16.844179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.013 [2024-12-05 12:17:16.844197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.013 [2024-12-05 12:17:16.844204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.013 [2024-12-05 12:17:16.852776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.014 [2024-12-05 12:17:16.852794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.014 [2024-12-05 12:17:16.852801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.014 [2024-12-05 12:17:16.861571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.014 [2024-12-05 12:17:16.861590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.014 [2024-12-05 12:17:16.861596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.014 [2024-12-05 12:17:16.871467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 
00:33:52.014 [2024-12-05 12:17:16.871486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:16.871492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:16.875929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:16.875948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:16.875954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:16.880550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:16.880569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:16.880577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:16.890344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:16.890362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:16.890368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:16.897534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:16.897552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:16.897558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:16.908293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:16.908311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:16.908317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:16.918645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:16.918663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:16.918669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:16.931021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:16.931040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:16.931046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:16.942703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:16.942721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:16.942727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:16.954313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:16.954332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:16.954338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:16.966197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:16.966215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:16.966221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:16.978729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:16.978747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:16.978757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:16.990677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:16.990695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:16.990702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:17.003543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:17.003561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:17.003568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:17.016345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:17.016363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:17.016370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:17.028673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:17.028691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:17.028697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:17.039333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:17.039350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:17.039357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:17.050505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:17.050522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:17.050528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.014 [2024-12-05 12:17:17.060862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.014 [2024-12-05 12:17:17.060880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.014 [2024-12-05 12:17:17.060887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.067378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.067397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.067403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.074373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.074394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.074400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.081303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.081321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.081327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.088236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.088254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.088260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.095182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.095199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.095205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.103920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.103938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.103944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.108592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.108610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.108616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.113123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.113141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.113147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.117531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.117549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.117555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.124275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.124293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.124300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.128709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.128727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.128734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.135356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.135373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.135379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.143570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.143588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.143594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.148272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.148290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.148296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.152750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.152767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.152774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.162863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.162881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.162887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.172845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.172863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.172869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.177066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.177083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.177089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.181479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.181497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.181507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.185938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.185956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.185962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.190501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.276 [2024-12-05 12:17:17.190519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.276 [2024-12-05 12:17:17.190525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.276 [2024-12-05 12:17:17.202588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.277 [2024-12-05 12:17:17.202606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.277 [2024-12-05 12:17:17.202612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.277 [2024-12-05 12:17:17.212530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.277 [2024-12-05 12:17:17.212548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.277 [2024-12-05 12:17:17.212554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.277 [2024-12-05 12:17:17.222065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.277 [2024-12-05 12:17:17.222083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.277 [2024-12-05 12:17:17.222089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.277 [2024-12-05 12:17:17.228445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.277 [2024-12-05 12:17:17.228467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.277 [2024-12-05 12:17:17.228473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.277 [2024-12-05 12:17:17.236079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.277 [2024-12-05 12:17:17.236097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.277 [2024-12-05 12:17:17.236103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.277 [2024-12-05 12:17:17.243558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.277 [2024-12-05 12:17:17.243575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.277 [2024-12-05 12:17:17.243582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.277 [2024-12-05 12:17:17.252988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.277 [2024-12-05 12:17:17.253009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.277 [2024-12-05 12:17:17.253015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.277 [2024-12-05 12:17:17.257391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.277 [2024-12-05 12:17:17.257409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.277 [2024-12-05 12:17:17.257415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.277 [2024-12-05 12:17:17.266478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.277 [2024-12-05 12:17:17.266496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.277 [2024-12-05 12:17:17.266502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.277 [2024-12-05 12:17:17.274292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.277 [2024-12-05 12:17:17.274310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.277 [2024-12-05 12:17:17.274316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.277 [2024-12-05 12:17:17.284121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.277 [2024-12-05 12:17:17.284139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.277 [2024-12-05 12:17:17.284145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.277 [2024-12-05 12:17:17.296604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.277 [2024-12-05 12:17:17.296623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.277 [2024-12-05 12:17:17.296629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.277 [2024-12-05 12:17:17.308502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.277 [2024-12-05 12:17:17.308520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.277 [2024-12-05 12:17:17.308527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.277 [2024-12-05 12:17:17.318044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.277 [2024-12-05 12:17:17.318062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.277 [2024-12-05 12:17:17.318068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.325418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.325437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.325443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.330304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.330323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.330329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.335322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.335340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.335346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.343385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.343403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.343409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.354185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.354204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.354210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.363978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.363996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.364002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.370078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.370096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.370102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.374516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.374533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.374540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.382571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.382589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.382595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.391791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.391809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.391818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.402371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.402389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.402396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.414452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.414475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.414481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.423995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.424013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.424020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.435008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.435025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.435031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.446439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.446461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.446467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.457489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.457506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.457513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.469528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.469546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.469552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.478323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.478342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.478348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.486823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.486841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.486848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.493615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.493633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.545 [2024-12-05 12:17:17.493640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.545 [2024-12-05 12:17:17.504882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.545 [2024-12-05 12:17:17.504899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.546 [2024-12-05 12:17:17.504906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.546 [2024-12-05 12:17:17.514588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.546 [2024-12-05 12:17:17.514606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.546 [2024-12-05 12:17:17.514613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.546 [2024-12-05 12:17:17.522517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.546 [2024-12-05 12:17:17.522534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.546 [2024-12-05 12:17:17.522541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.546 [2024-12-05 12:17:17.528110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.546 [2024-12-05 12:17:17.528128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.546 [2024-12-05 12:17:17.528134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:33:52.546 [2024-12-05 12:17:17.532582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.546 [2024-12-05 12:17:17.532600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.546 [2024-12-05 12:17:17.532606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:33:52.546 [2024-12-05 12:17:17.538429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.546 [2024-12-05 12:17:17.538447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.546 [2024-12-05 12:17:17.538457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:33:52.546 [2024-12-05 12:17:17.544715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.546 [2024-12-05 12:17:17.544733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:52.546 [2024-12-05 12:17:17.544743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:33:52.546 [2024-12-05 12:17:17.549129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570)
00:33:52.546 [2024-12-05 12:17:17.549146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:33:52.546 [2024-12-05 12:17:17.549152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.546 [2024-12-05 12:17:17.553581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.546 [2024-12-05 12:17:17.553599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.546 [2024-12-05 12:17:17.553605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.546 [2024-12-05 12:17:17.562411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.546 [2024-12-05 12:17:17.562430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.546 [2024-12-05 12:17:17.562437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.546 [2024-12-05 12:17:17.568491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.546 [2024-12-05 12:17:17.568510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.546 [2024-12-05 12:17:17.568516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:52.546 [2024-12-05 12:17:17.576599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.546 [2024-12-05 12:17:17.576617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:3 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.546 [2024-12-05 12:17:17.576623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.546 [2024-12-05 12:17:17.582215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.546 [2024-12-05 12:17:17.582232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.546 [2024-12-05 12:17:17.582240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.546 [2024-12-05 12:17:17.584743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.546 [2024-12-05 12:17:17.584760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.546 [2024-12-05 12:17:17.584766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.546 [2024-12-05 12:17:17.592604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.546 [2024-12-05 12:17:17.592622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.546 [2024-12-05 12:17:17.592628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.601179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.601200] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.601206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.605662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.605680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.605686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.610509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.610527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.610533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.617087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.617106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.617112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.625814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 
00:33:52.925 [2024-12-05 12:17:17.625833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.625840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.925 3615.00 IOPS, 451.88 MiB/s [2024-12-05T11:17:17.974Z] [2024-12-05 12:17:17.634269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.634284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.634291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.640287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.640305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.640311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.651106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.651123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.651129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.663387] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.663406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.663412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.675131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.675149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.675155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.684634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.684652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.684658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.694348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.694366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.694372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 
m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.699925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.699942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.699948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.709677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.709694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.709700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.718296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.718314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.718321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.723318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.723336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.723343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.731298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.731316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.731323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.736538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.736556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.736566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.745904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.745921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.745928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.750822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.750838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.750845] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.755975] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.755992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.755999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.760097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.760116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.760122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.767827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.767846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.767852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.773386] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.773404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:52.925 [2024-12-05 12:17:17.773410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.781770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.925 [2024-12-05 12:17:17.781789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.925 [2024-12-05 12:17:17.781795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.925 [2024-12-05 12:17:17.790529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.790547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.790554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.801445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.801472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.801478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.813841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.813859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 
lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.813865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.822045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.822063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.822069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.833963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.833981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.833987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.842727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.842745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.842752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.854124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.854141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.854147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.865437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.865460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.865466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.877820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.877838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.877844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.888084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.888102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.888108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.895919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 
00:33:52.926 [2024-12-05 12:17:17.895937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.895943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.901490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.901507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.901513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.905859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.905877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.905883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.914277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.914295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.914301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.919167] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.919184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.919190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.926683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.926702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.926708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.937272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.937290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.937297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.942331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.942349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.942355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0043 
p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.949725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.949742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.949753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.954416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.954434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.954440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:52.926 [2024-12-05 12:17:17.956931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:52.926 [2024-12-05 12:17:17.956948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:52.926 [2024-12-05 12:17:17.956954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.258 [2024-12-05 12:17:17.963588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.258 [2024-12-05 12:17:17.963605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.258 [2024-12-05 12:17:17.963611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.258 [2024-12-05 12:17:17.968258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.258 [2024-12-05 12:17:17.968275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.258 [2024-12-05 12:17:17.968281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.258 [2024-12-05 12:17:17.976073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.258 [2024-12-05 12:17:17.976090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.258 [2024-12-05 12:17:17.976096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.258 [2024-12-05 12:17:17.980586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.258 [2024-12-05 12:17:17.980602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.258 [2024-12-05 12:17:17.980608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.258 [2024-12-05 12:17:17.987601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.258 [2024-12-05 12:17:17.987617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.258 [2024-12-05 12:17:17.987623] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.258 [2024-12-05 12:17:17.997818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.258 [2024-12-05 12:17:17.997835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.258 [2024-12-05 12:17:17.997841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.258 [2024-12-05 12:17:18.003220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.258 [2024-12-05 12:17:18.003240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.258 [2024-12-05 12:17:18.003246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.258 [2024-12-05 12:17:18.014319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.258 [2024-12-05 12:17:18.014336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.258 [2024-12-05 12:17:18.014342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.258 [2024-12-05 12:17:18.021416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.258 [2024-12-05 12:17:18.021433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:53.258 [2024-12-05 12:17:18.021440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.258 [2024-12-05 12:17:18.033259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.258 [2024-12-05 12:17:18.033277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.258 [2024-12-05 12:17:18.033284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.258 [2024-12-05 12:17:18.045547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.258 [2024-12-05 12:17:18.045565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.258 [2024-12-05 12:17:18.045571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.055291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.055309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.055316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.065061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.065079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 
lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.065086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.069700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.069718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.069724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.074112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.074130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.074136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.078428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.078446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.078458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.083468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.083486] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.083492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.091284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.091303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.091309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.102900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.102918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.102925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.112799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.112818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.112824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.121699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 
00:33:53.259 [2024-12-05 12:17:18.121718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.121724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.133279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.133297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.133303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.142152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.142170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.142176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.146499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.146516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.146525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.152148] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.152166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.152172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.156693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.156711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.156718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.164953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.164971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.164978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.169439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.169464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.169470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0023 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.176536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.176555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.176561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.183053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.183071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.183077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.187412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.187431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.187437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.191768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.191786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.191792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.196347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.196365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.196372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.200644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.200662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.200668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.206685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.206703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.206709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.217995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.218014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.218020] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.226003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.226021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.226028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.259 [2024-12-05 12:17:18.230406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.259 [2024-12-05 12:17:18.230425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.259 [2024-12-05 12:17:18.230431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.260 [2024-12-05 12:17:18.236714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.260 [2024-12-05 12:17:18.236732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.260 [2024-12-05 12:17:18.236738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.260 [2024-12-05 12:17:18.245084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.260 [2024-12-05 12:17:18.245102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:53.260 [2024-12-05 12:17:18.245109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.260 [2024-12-05 12:17:18.257125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.260 [2024-12-05 12:17:18.257143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.260 [2024-12-05 12:17:18.257152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.260 [2024-12-05 12:17:18.266964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.260 [2024-12-05 12:17:18.266983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.260 [2024-12-05 12:17:18.266989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.260 [2024-12-05 12:17:18.278342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.260 [2024-12-05 12:17:18.278360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.260 [2024-12-05 12:17:18.278366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.260 [2024-12-05 12:17:18.290960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.260 [2024-12-05 12:17:18.290978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.260 [2024-12-05 12:17:18.290985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.260 [2024-12-05 12:17:18.300658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.260 [2024-12-05 12:17:18.300676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.260 [2024-12-05 12:17:18.300682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.260 [2024-12-05 12:17:18.305600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.260 [2024-12-05 12:17:18.305619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.260 [2024-12-05 12:17:18.305626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.521 [2024-12-05 12:17:18.315183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.521 [2024-12-05 12:17:18.315202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.521 [2024-12-05 12:17:18.315208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.521 [2024-12-05 12:17:18.322281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.521 [2024-12-05 12:17:18.322299] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.521 [2024-12-05 12:17:18.322305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.521 [2024-12-05 12:17:18.330397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.521 [2024-12-05 12:17:18.330415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.521 [2024-12-05 12:17:18.330422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.521 [2024-12-05 12:17:18.342054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.521 [2024-12-05 12:17:18.342076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.521 [2024-12-05 12:17:18.342082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.521 [2024-12-05 12:17:18.353128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.521 [2024-12-05 12:17:18.353147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.521 [2024-12-05 12:17:18.353153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.521 [2024-12-05 12:17:18.365397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xa11570) 00:33:53.521 [2024-12-05 12:17:18.365415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.521 [2024-12-05 12:17:18.365421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.521 [2024-12-05 12:17:18.377487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.521 [2024-12-05 12:17:18.377506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.521 [2024-12-05 12:17:18.377512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.521 [2024-12-05 12:17:18.390293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.521 [2024-12-05 12:17:18.390311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.521 [2024-12-05 12:17:18.390318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.521 [2024-12-05 12:17:18.402571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.402588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.402595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.414768] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.414786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.414792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.427700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.427718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.427724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.440050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.440068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.440074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.451256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.451274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.451281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0003 
p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.456639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.456657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.456663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.466290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.466308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.466315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.471204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.471222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.471229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.475655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.475673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.475679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.481020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.481039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.481045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.485449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.485480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.485486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.489822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.489840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.489847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.494405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.494423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.494433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.502078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.502096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.502102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.511843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.511861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.511867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.521492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.521511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.521517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.531076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.531094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:53.522 [2024-12-05 12:17:18.531100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.542545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.542563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.542569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.553481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.553499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.553506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.522 [2024-12-05 12:17:18.564989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.522 [2024-12-05 12:17:18.565007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.522 [2024-12-05 12:17:18.565014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.799 [2024-12-05 12:17:18.577192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.799 [2024-12-05 12:17:18.577211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 
lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.799 [2024-12-05 12:17:18.577217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.799 [2024-12-05 12:17:18.587212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.799 [2024-12-05 12:17:18.587233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.799 [2024-12-05 12:17:18.587239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.799 [2024-12-05 12:17:18.593944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.799 [2024-12-05 12:17:18.593963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.799 [2024-12-05 12:17:18.593969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.799 [2024-12-05 12:17:18.601119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.799 [2024-12-05 12:17:18.601136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.799 [2024-12-05 12:17:18.601142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.799 [2024-12-05 12:17:18.605724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.799 [2024-12-05 12:17:18.605742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.799 [2024-12-05 12:17:18.605749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.799 [2024-12-05 12:17:18.610519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.799 [2024-12-05 12:17:18.610538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.799 [2024-12-05 12:17:18.610544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.799 [2024-12-05 12:17:18.617431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.799 [2024-12-05 12:17:18.617449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.799 [2024-12-05 12:17:18.617460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:33:53.799 [2024-12-05 12:17:18.622236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.799 [2024-12-05 12:17:18.622254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.799 [2024-12-05 12:17:18.622261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:33:53.799 [2024-12-05 12:17:18.626609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 
00:33:53.799 [2024-12-05 12:17:18.626627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.799 [2024-12-05 12:17:18.626633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:53.799 3736.00 IOPS, 467.00 MiB/s [2024-12-05T11:17:18.848Z] [2024-12-05 12:17:18.636969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xa11570) 00:33:53.799 [2024-12-05 12:17:18.636987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:53.799 [2024-12-05 12:17:18.636994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:53.799 00:33:53.799 Latency(us) 00:33:53.799 [2024-12-05T11:17:18.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:53.799 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:53.799 nvme0n1 : 2.00 3738.31 467.29 0.00 0.00 4274.91 505.17 13817.17 00:33:53.799 [2024-12-05T11:17:18.848Z] =================================================================================================================== 00:33:53.799 [2024-12-05T11:17:18.848Z] Total : 3738.31 467.29 0.00 0.00 4274.91 505.17 13817.17 00:33:53.799 { 00:33:53.799 "results": [ 00:33:53.799 { 00:33:53.799 "job": "nvme0n1", 00:33:53.799 "core_mask": "0x2", 00:33:53.799 "workload": "randread", 00:33:53.799 "status": "finished", 00:33:53.799 "queue_depth": 16, 00:33:53.799 "io_size": 131072, 00:33:53.799 "runtime": 2.003043, 00:33:53.799 "iops": 3738.3121580515244, 00:33:53.799 "mibps": 467.28901975644055, 00:33:53.799 "io_failed": 0, 00:33:53.799 "io_timeout": 0, 00:33:53.799 "avg_latency_us": 4274.907578347578, 00:33:53.799 
"min_latency_us": 505.17333333333335, 00:33:53.799 "max_latency_us": 13817.173333333334 00:33:53.799 } 00:33:53.799 ], 00:33:53.799 "core_count": 1 00:33:53.799 } 00:33:53.799 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:53.799 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:53.799 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:53.799 | .driver_specific 00:33:53.799 | .nvme_error 00:33:53.799 | .status_code 00:33:53.799 | .command_transient_transport_error' 00:33:53.799 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:53.799 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 242 > 0 )) 00:33:53.799 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1539560 00:33:53.799 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1539560 ']' 00:33:53.799 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1539560 00:33:53.799 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:33:53.799 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:53.799 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1539560 00:33:54.059 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:54.059 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # 
'[' reactor_1 = sudo ']' 00:33:54.059 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1539560' 00:33:54.059 killing process with pid 1539560 00:33:54.059 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1539560 00:33:54.059 Received shutdown signal, test time was about 2.000000 seconds 00:33:54.059 00:33:54.059 Latency(us) 00:33:54.059 [2024-12-05T11:17:19.108Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:54.059 [2024-12-05T11:17:19.108Z] =================================================================================================================== 00:33:54.059 [2024-12-05T11:17:19.108Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:54.059 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1539560 00:33:54.059 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:33:54.059 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:54.059 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:54.059 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:54.059 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:54.059 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:33:54.059 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1540239 00:33:54.059 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1540239 /var/tmp/bperf.sock 00:33:54.059 12:17:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1540239 ']' 00:33:54.060 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:54.060 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:54.060 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:54.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:54.060 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:54.060 12:17:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:54.060 [2024-12-05 12:17:19.013672] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:33:54.060 [2024-12-05 12:17:19.013716] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540239 ] 00:33:54.060 [2024-12-05 12:17:19.063457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.060 [2024-12-05 12:17:19.093051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.319 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:54.319 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:33:54.319 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:54.319 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:54.319 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:54.319 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.319 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:54.319 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.319 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:54.319 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:54.578 nvme0n1 00:33:54.838 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:54.838 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.838 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:54.838 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.838 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:54.838 12:17:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:54.838 Running I/O for 2 seconds... 
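The `get_transient_errcount` step earlier in this log (host/digest.sh) pipes `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1` through a multi-line jq filter to read the NVMe transient transport error counter. A minimal, self-contained sketch of that extraction, run against a hand-written sample payload in the shape the RPC returns (the 242 value mirrors the `(( 242 > 0 ))` check above; the live harness queries the bdevperf RPC socket instead):

```shell
# Sample payload in the shape bdev_get_iostat returns (hand-written; in the
# real run this counter is populated because the controller was created with
# bdev_nvme_set_options --nvme-error-stat).
json='{"bdevs":[{"driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":242}}}}]}'

# Same jq filter the harness assembles across several lines, collapsed onto one.
errcount=$(echo "$json" | jq -r \
  '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# The digest-error test passes when at least one injected CRC corruption
# surfaced as a transient transport error completion.
(( errcount > 0 )) && echo "transient errors: $errcount"
```

The counter is per-controller cumulative, which is why the harness reads it once after the timed bdevperf run rather than polling during I/O.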
00:33:54.838 [2024-12-05 12:17:19.761408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee0630 00:33:54.838 [2024-12-05 12:17:19.762451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.838 [2024-12-05 12:17:19.762480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:54.838 [2024-12-05 12:17:19.770032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee1710 00:33:54.838 [2024-12-05 12:17:19.771125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:17356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.838 [2024-12-05 12:17:19.771143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:54.838 [2024-12-05 12:17:19.778524] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee27f0 00:33:54.838 [2024-12-05 12:17:19.779616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.838 [2024-12-05 12:17:19.779633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:54.838 [2024-12-05 12:17:19.786976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee38d0 00:33:54.838 [2024-12-05 12:17:19.788060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:17438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.838 [2024-12-05 12:17:19.788077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:54.838 [2024-12-05 12:17:19.795462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee49b0 00:33:54.838 [2024-12-05 12:17:19.796514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:5739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.838 [2024-12-05 12:17:19.796530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:54.838 [2024-12-05 12:17:19.803924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee5a90 00:33:54.838 [2024-12-05 12:17:19.804999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17523 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.838 [2024-12-05 12:17:19.805014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:54.838 [2024-12-05 12:17:19.812389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eec840 00:33:54.838 [2024-12-05 12:17:19.813419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:23381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.838 [2024-12-05 12:17:19.813435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:54.838 [2024-12-05 12:17:19.820851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eed920 00:33:54.838 [2024-12-05 12:17:19.821925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.838 [2024-12-05 12:17:19.821941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:54.838 [2024-12-05 12:17:19.829291] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eeea00 00:33:54.838 [2024-12-05 12:17:19.830375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.838 [2024-12-05 12:17:19.830391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:54.838 [2024-12-05 12:17:19.837741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eefae0 00:33:54.838 [2024-12-05 12:17:19.838768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.838 [2024-12-05 12:17:19.838785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:54.838 [2024-12-05 12:17:19.846157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef0bc0 00:33:54.838 [2024-12-05 12:17:19.847220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.838 [2024-12-05 12:17:19.847236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:54.838 [2024-12-05 12:17:19.854588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef1ca0 00:33:54.838 [2024-12-05 12:17:19.855635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:54.838 [2024-12-05 12:17:19.855651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:54.838 [2024-12-05 12:17:19.863021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef2d80 00:33:54.838 [2024-12-05 12:17:19.864110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.838 [2024-12-05 12:17:19.864125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:54.838 [2024-12-05 12:17:19.871427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef3e60 00:33:54.838 [2024-12-05 12:17:19.872512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.838 [2024-12-05 12:17:19.872528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:54.838 [2024-12-05 12:17:19.879844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef4f40 00:33:54.838 [2024-12-05 12:17:19.880985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.838 [2024-12-05 12:17:19.881001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:19.889394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee73e0 00:33:55.099 [2024-12-05 12:17:19.890897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 
lba:12515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:19.890912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:19.895457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef0ff8 00:33:55.099 [2024-12-05 12:17:19.896175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:19.896193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:19.904789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef7538 00:33:55.099 [2024-12-05 12:17:19.905716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:19.905732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:19.913184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef0350 00:33:55.099 [2024-12-05 12:17:19.914123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:19.914139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:19.921730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef7100 00:33:55.099 [2024-12-05 12:17:19.922657] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:19.922673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:19.930137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef6020 00:33:55.099 [2024-12-05 12:17:19.931087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:19.931103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:19.937994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eed4e8 00:33:55.099 [2024-12-05 12:17:19.938940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:19.938955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:19.947308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eeea00 00:33:55.099 [2024-12-05 12:17:19.948348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:24528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:19.948363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:19.955895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efbcf0 
00:33:55.099 [2024-12-05 12:17:19.956965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:19.956981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:19.964314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efac10 00:33:55.099 [2024-12-05 12:17:19.965377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:19.965393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:19.972730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef9b30 00:33:55.099 [2024-12-05 12:17:19.973811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:23853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:19.973826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:19.981155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef8a50 00:33:55.099 [2024-12-05 12:17:19.982238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:19.982254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:19.989584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x105f510) with pdu=0x200016ef7970 00:33:55.099 [2024-12-05 12:17:19.990617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:19.990633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:19.998021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef4b08 00:33:55.099 [2024-12-05 12:17:19.999098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:20381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:19.999114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:20.006352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efcdd0 00:33:55.099 [2024-12-05 12:17:20.007587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:13121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:20.007604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:20.014094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef31b8 00:33:55.099 [2024-12-05 12:17:20.014799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:20.014815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:20.023754] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef6020 00:33:55.099 [2024-12-05 12:17:20.024904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:20.024919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:20.031669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee1b48 00:33:55.099 [2024-12-05 12:17:20.032451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:16598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:20.032470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:20.040261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eeee38 00:33:55.099 [2024-12-05 12:17:20.040915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:20.040930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:20.048840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eed920 00:33:55.099 [2024-12-05 12:17:20.049774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:20.049789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0058 p:0 m:0 
dnr:0 00:33:55.099 [2024-12-05 12:17:20.057255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee6300 00:33:55.099 [2024-12-05 12:17:20.058190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:22673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:20.058205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:20.065939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef2d80 00:33:55.099 [2024-12-05 12:17:20.066670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:15295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:20.066685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:20.074509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef1430 00:33:55.099 [2024-12-05 12:17:20.075538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:20.075553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:20.082960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef92c0 00:33:55.099 [2024-12-05 12:17:20.084016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:16327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:20.084032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:43 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:20.090841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee1f80 00:33:55.099 [2024-12-05 12:17:20.091849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.099 [2024-12-05 12:17:20.091864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:55.099 [2024-12-05 12:17:20.098701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee5ec8 00:33:55.100 [2024-12-05 12:17:20.099357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:22580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.100 [2024-12-05 12:17:20.099373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:55.100 [2024-12-05 12:17:20.107018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee4de8 00:33:55.100 [2024-12-05 12:17:20.107658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.100 [2024-12-05 12:17:20.107674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:55.100 [2024-12-05 12:17:20.115427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eeea00 00:33:55.100 [2024-12-05 12:17:20.116122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:12610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.100 [2024-12-05 12:17:20.116141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:55.100 [2024-12-05 12:17:20.123852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efc998 00:33:55.100 [2024-12-05 12:17:20.124404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9038 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.100 [2024-12-05 12:17:20.124421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:33:55.100 [2024-12-05 12:17:20.132566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eee190 00:33:55.100 [2024-12-05 12:17:20.133360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.100 [2024-12-05 12:17:20.133375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:33:55.100 [2024-12-05 12:17:20.141136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee49b0 00:33:55.100 [2024-12-05 12:17:20.141948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.100 [2024-12-05 12:17:20.141963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.149554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee5a90 00:33:55.361 [2024-12-05 12:17:20.150374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:17264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:33:55.361 [2024-12-05 12:17:20.150390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.157981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eed0b0 00:33:55.361 [2024-12-05 12:17:20.158802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.158818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.166396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee12d8 00:33:55.361 [2024-12-05 12:17:20.167158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.167173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.174816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef8a50 00:33:55.361 [2024-12-05 12:17:20.175609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.175624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.183255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eeaef0 00:33:55.361 [2024-12-05 12:17:20.184044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.184060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.191701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee9e10 00:33:55.361 [2024-12-05 12:17:20.192517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:9200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.192533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.200112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef4f40 00:33:55.361 [2024-12-05 12:17:20.200916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.200932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.208559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee6fa8 00:33:55.361 [2024-12-05 12:17:20.209353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.209369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.216967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eecc78 00:33:55.361 [2024-12-05 12:17:20.217736] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.217751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.225387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee84c0 00:33:55.361 [2024-12-05 12:17:20.226190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.226206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.233824] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef31b8 00:33:55.361 [2024-12-05 12:17:20.234616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:8004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.234631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.242248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef6020 00:33:55.361 [2024-12-05 12:17:20.243064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.243080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.250682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efa7d8 00:33:55.361 
[2024-12-05 12:17:20.251448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:7758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.251467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.259112] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef0788 00:33:55.361 [2024-12-05 12:17:20.259895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.259911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.267542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef1ca0 00:33:55.361 [2024-12-05 12:17:20.268339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:15870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.268354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.278005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee1f80 00:33:55.361 [2024-12-05 12:17:20.279366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.279381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.284200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x105f510) with pdu=0x200016ef7970 00:33:55.361 [2024-12-05 12:17:20.284964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.284979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.293633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef3e60 00:33:55.361 [2024-12-05 12:17:20.294515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.294530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.302054] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee0a68 00:33:55.361 [2024-12-05 12:17:20.302994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.303010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.310482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efb048 00:33:55.361 [2024-12-05 12:17:20.311422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:6475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.311437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.318930] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016edf118 00:33:55.361 [2024-12-05 12:17:20.319860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:2094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.319876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.327360] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eebfd0 00:33:55.361 [2024-12-05 12:17:20.328314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.361 [2024-12-05 12:17:20.328330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.361 [2024-12-05 12:17:20.335800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef1868 00:33:55.362 [2024-12-05 12:17:20.336706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.362 [2024-12-05 12:17:20.336725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.362 [2024-12-05 12:17:20.344220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eedd58 00:33:55.362 [2024-12-05 12:17:20.345165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.362 [2024-12-05 12:17:20.345181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 
dnr:0 00:33:55.362 [2024-12-05 12:17:20.352648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee6738 00:33:55.362 [2024-12-05 12:17:20.353527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.362 [2024-12-05 12:17:20.353543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.362 [2024-12-05 12:17:20.361092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee5658 00:33:55.362 [2024-12-05 12:17:20.362020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:20370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.362 [2024-12-05 12:17:20.362036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.362 [2024-12-05 12:17:20.369539] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee4578 00:33:55.362 [2024-12-05 12:17:20.370482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.362 [2024-12-05 12:17:20.370498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.362 [2024-12-05 12:17:20.377963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efc998 00:33:55.362 [2024-12-05 12:17:20.378893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.362 [2024-12-05 12:17:20.378909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.362 [2024-12-05 12:17:20.386376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef4b08 00:33:55.362 [2024-12-05 12:17:20.387321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:9136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.362 [2024-12-05 12:17:20.387337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.362 [2024-12-05 12:17:20.394792] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef7970 00:33:55.362 [2024-12-05 12:17:20.395712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.362 [2024-12-05 12:17:20.395727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.362 [2024-12-05 12:17:20.403226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efe720 00:33:55.362 [2024-12-05 12:17:20.404174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.362 [2024-12-05 12:17:20.404189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.411667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016edf988 00:33:55.623 [2024-12-05 12:17:20.412612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.412627] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.420093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee3498 00:33:55.623 [2024-12-05 12:17:20.421015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.421031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.428509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee23b8 00:33:55.623 [2024-12-05 12:17:20.429438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.429456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.436919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef4298 00:33:55.623 [2024-12-05 12:17:20.437824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.437840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.445347] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016edfdc0 00:33:55.623 [2024-12-05 12:17:20.446237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.446253] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.453825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efbcf0 00:33:55.623 [2024-12-05 12:17:20.454750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.454766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.462272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eff3c8 00:33:55.623 [2024-12-05 12:17:20.463159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:2010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.463175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.470697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef8a50 00:33:55.623 [2024-12-05 12:17:20.471628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.471644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.479109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee12d8 00:33:55.623 [2024-12-05 12:17:20.480031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16766 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:33:55.623 [2024-12-05 12:17:20.480047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.487526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eed0b0 00:33:55.623 [2024-12-05 12:17:20.488457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.488472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.495937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee5a90 00:33:55.623 [2024-12-05 12:17:20.496872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.496887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.504375] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee49b0 00:33:55.623 [2024-12-05 12:17:20.505295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.505311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.512816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eddc00 00:33:55.623 [2024-12-05 12:17:20.513726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 
nsid:1 lba:22891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.513741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.521229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef46d0 00:33:55.623 [2024-12-05 12:17:20.522163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19399 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.522179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.529635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef7538 00:33:55.623 [2024-12-05 12:17:20.530533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.530549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.538055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee3d08 00:33:55.623 [2024-12-05 12:17:20.539001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:7271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.539017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.623 [2024-12-05 12:17:20.546498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efcdd0 00:33:55.623 [2024-12-05 12:17:20.547384] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.623 [2024-12-05 12:17:20.547400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.624 [2024-12-05 12:17:20.554916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee3060 00:33:55.624 [2024-12-05 12:17:20.555845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.624 [2024-12-05 12:17:20.555863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.624 [2024-12-05 12:17:20.563326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee1f80 00:33:55.624 [2024-12-05 12:17:20.564234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:17752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.624 [2024-12-05 12:17:20.564249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.624 [2024-12-05 12:17:20.571733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef3e60 00:33:55.624 [2024-12-05 12:17:20.572654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.624 [2024-12-05 12:17:20.572669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.624 [2024-12-05 12:17:20.580134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee0a68 00:33:55.624 
[2024-12-05 12:17:20.581073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.624 [2024-12-05 12:17:20.581088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.624 [2024-12-05 12:17:20.588567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efb048 00:33:55.624 [2024-12-05 12:17:20.589500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.624 [2024-12-05 12:17:20.589516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.624 [2024-12-05 12:17:20.597005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016edf118 00:33:55.624 [2024-12-05 12:17:20.597950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:18212 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.624 [2024-12-05 12:17:20.597965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.624 [2024-12-05 12:17:20.605445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eebfd0 00:33:55.624 [2024-12-05 12:17:20.606329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:23228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.624 [2024-12-05 12:17:20.606344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.624 [2024-12-05 12:17:20.613856] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x105f510) with pdu=0x200016ef1868 00:33:55.624 [2024-12-05 12:17:20.614741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:8853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.624 [2024-12-05 12:17:20.614757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.624 [2024-12-05 12:17:20.622264] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eedd58 00:33:55.624 [2024-12-05 12:17:20.623186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:10768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.624 [2024-12-05 12:17:20.623201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.624 [2024-12-05 12:17:20.630683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee6738 00:33:55.624 [2024-12-05 12:17:20.631623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.624 [2024-12-05 12:17:20.631696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.624 [2024-12-05 12:17:20.639282] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee5658 00:33:55.624 [2024-12-05 12:17:20.640216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.624 [2024-12-05 12:17:20.640231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.624 [2024-12-05 12:17:20.647715] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee4578 00:33:55.624 [2024-12-05 12:17:20.648654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:25101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.624 [2024-12-05 12:17:20.648669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.624 [2024-12-05 12:17:20.656126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efc998 00:33:55.624 [2024-12-05 12:17:20.657034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:20725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.624 [2024-12-05 12:17:20.657049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.624 [2024-12-05 12:17:20.664563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef4b08 00:33:55.624 [2024-12-05 12:17:20.665468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:11225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.624 [2024-12-05 12:17:20.665483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.884 [2024-12-05 12:17:20.672963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef7970 00:33:55.884 [2024-12-05 12:17:20.673901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:6798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.884 [2024-12-05 12:17:20.673916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 
dnr:0 00:33:55.884 [2024-12-05 12:17:20.681392] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efe720 00:33:55.884 [2024-12-05 12:17:20.682321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.884 [2024-12-05 12:17:20.682336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.884 [2024-12-05 12:17:20.689811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016edf988 00:33:55.884 [2024-12-05 12:17:20.690747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:5157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.884 [2024-12-05 12:17:20.690763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.884 [2024-12-05 12:17:20.698230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee3498 00:33:55.884 [2024-12-05 12:17:20.699164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:15149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.884 [2024-12-05 12:17:20.699181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.706805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee23b8 00:33:55.885 [2024-12-05 12:17:20.707735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3027 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.707750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.715220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef4298 00:33:55.885 [2024-12-05 12:17:20.716143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.716158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.723649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016edfdc0 00:33:55.885 [2024-12-05 12:17:20.724577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.724593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.732068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efbcf0 00:33:55.885 [2024-12-05 12:17:20.733022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.733038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.740501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eff3c8 00:33:55.885 [2024-12-05 12:17:20.741435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:15848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.741450] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.748911] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef8a50 00:33:55.885 [2024-12-05 12:17:20.749854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:12317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.749870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.885 30111.00 IOPS, 117.62 MiB/s [2024-12-05T11:17:20.934Z] [2024-12-05 12:17:20.757313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef1430 00:33:55.885 [2024-12-05 12:17:20.758110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.758125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.766026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee6b70 00:33:55.885 [2024-12-05 12:17:20.767063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:10137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.767078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.774615] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef9f68 00:33:55.885 [2024-12-05 12:17:20.775668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21350 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.775687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.783060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef2948 00:33:55.885 [2024-12-05 12:17:20.784109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25207 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.784125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.791492] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef3a28 00:33:55.885 [2024-12-05 12:17:20.792516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.792531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.799886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee8d30 00:33:55.885 [2024-12-05 12:17:20.800949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:4029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.800964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.808296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef92c0 00:33:55.885 [2024-12-05 12:17:20.809352] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:16854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.809367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.816734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efd640 00:33:55.885 [2024-12-05 12:17:20.817742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.817759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.825169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efe2e8 00:33:55.885 [2024-12-05 12:17:20.826207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.826223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.833609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eeea00 00:33:55.885 [2024-12-05 12:17:20.834657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.834672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.842021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee3498 00:33:55.885 [2024-12-05 12:17:20.843069] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.843084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.850426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee23b8 00:33:55.885 [2024-12-05 12:17:20.851442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:20550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.851462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.858858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef4298 00:33:55.885 [2024-12-05 12:17:20.859877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:11476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.859893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.867278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016edfdc0 00:33:55.885 [2024-12-05 12:17:20.868325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.868341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.875712] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with 
pdu=0x200016efbcf0 00:33:55.885 [2024-12-05 12:17:20.876763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.876779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.884126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee9e10 00:33:55.885 [2024-12-05 12:17:20.885185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.885201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.892623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef4f40 00:33:55.885 [2024-12-05 12:17:20.893659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.893674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.901038] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee6fa8 00:33:55.885 [2024-12-05 12:17:20.902071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.902087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.909464] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x105f510) with pdu=0x200016efa3a0 00:33:55.885 [2024-12-05 12:17:20.910513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.910529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.917895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef2510 00:33:55.885 [2024-12-05 12:17:20.918938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18953 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.885 [2024-12-05 12:17:20.918954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:55.885 [2024-12-05 12:17:20.926319] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef35f0 00:33:55.885 [2024-12-05 12:17:20.927351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:55.886 [2024-12-05 12:17:20.927367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.145 [2024-12-05 12:17:20.934726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee9168 00:33:56.145 [2024-12-05 12:17:20.935732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.145 [2024-12-05 12:17:20.935748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.145 [2024-12-05 
12:17:20.943133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee73e0 00:33:56.145 [2024-12-05 12:17:20.944179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:17056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:20.944194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:20.951542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef96f8 00:33:56.146 [2024-12-05 12:17:20.952588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:20.952603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:20.959968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efeb58 00:33:56.146 [2024-12-05 12:17:20.961017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:20.961032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:20.968465] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eeee38 00:33:56.146 [2024-12-05 12:17:20.969501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:2552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:20.969517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0065 
p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:20.976890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee27f0 00:33:56.146 [2024-12-05 12:17:20.977943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4013 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:20.977958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:20.985299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee1710 00:33:56.146 [2024-12-05 12:17:20.986325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:20.986341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:20.993718] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee38d0 00:33:56.146 [2024-12-05 12:17:20.994724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:18331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:20.994742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:21.002143] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee01f8 00:33:56.146 [2024-12-05 12:17:21.003198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7899 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:21.003214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:21.010567] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016efb8b8 00:33:56.146 [2024-12-05 12:17:21.011608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:21.011624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:21.018992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eeb328 00:33:56.146 [2024-12-05 12:17:21.020034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:24059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:21.020050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:21.027402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016eea248 00:33:56.146 [2024-12-05 12:17:21.028457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:18640 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:21.028472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:21.035821] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef5378 00:33:56.146 [2024-12-05 12:17:21.036872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:21.036888] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:21.044238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee6b70 00:33:56.146 [2024-12-05 12:17:21.045277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:21.045293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:21.052665] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef9f68 00:33:56.146 [2024-12-05 12:17:21.053680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:21.053695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:21.061090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef2948 00:33:56.146 [2024-12-05 12:17:21.062146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 [2024-12-05 12:17:21.062162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:33:56.146 [2024-12-05 12:17:21.069505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ef3a28 00:33:56.146 [2024-12-05 12:17:21.070517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:56.146 
00:33:56.146 [2024-12-05 12:17:21.070533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:33:56.146 [2024-12-05 12:17:21.077912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f510) with pdu=0x200016ee8d30
00:33:56.146 [2024-12-05 12:17:21.078972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:20813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:56.146 [2024-12-05 12:17:21.078987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
[... the same three-line pattern — a tcp.c:2241 data_crc32_calc_done "Data digest error" on tqpair=(0x105f510), the WRITE command it aborted (varying cid/lba/pdu), and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion — repeats for dozens of I/Os between 12:17:21.086 and 12:17:21.752 (sqhd 0065, 0064, 0032, 0042); duplicates elided ...]
00:33:56.927 30217.00 IOPS, 118.04 MiB/s
00:33:56.927 Latency(us)
[2024-12-05T11:17:21.976Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:56.927 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:56.927 nvme0n1 : 2.00 30222.99 118.06 0.00 0.00 4229.76
2252.80 10485.76 00:33:56.927 [2024-12-05T11:17:21.976Z] =================================================================================================================== 00:33:56.927 [2024-12-05T11:17:21.976Z] Total : 30222.99 118.06 0.00 0.00 4229.76 2252.80 10485.76 00:33:56.927 { 00:33:56.927 "results": [ 00:33:56.927 { 00:33:56.927 "job": "nvme0n1", 00:33:56.927 "core_mask": "0x2", 00:33:56.927 "workload": "randwrite", 00:33:56.927 "status": "finished", 00:33:56.927 "queue_depth": 128, 00:33:56.927 "io_size": 4096, 00:33:56.927 "runtime": 2.003839, 00:33:56.927 "iops": 30222.98697649861, 00:33:56.927 "mibps": 118.0585428769477, 00:33:56.927 "io_failed": 0, 00:33:56.927 "io_timeout": 0, 00:33:56.927 "avg_latency_us": 4229.763529606023, 00:33:56.927 "min_latency_us": 2252.8, 00:33:56.927 "max_latency_us": 10485.76 00:33:56.927 } 00:33:56.927 ], 00:33:56.927 "core_count": 1 00:33:56.927 } 00:33:56.927 12:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:56.927 12:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:56.927 12:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:56.927 12:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:56.927 | .driver_specific 00:33:56.927 | .nvme_error 00:33:56.928 | .status_code 00:33:56.928 | .command_transient_transport_error' 00:33:56.928 12:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 237 > 0 )) 00:33:56.928 12:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1540239 00:33:56.928 12:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1540239 ']' 
00:33:56.928 12:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1540239 00:33:56.928 12:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:33:56.928 12:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:56.928 12:17:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1540239 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1540239' 00:33:57.188 killing process with pid 1540239 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1540239 00:33:57.188 Received shutdown signal, test time was about 2.000000 seconds 00:33:57.188 00:33:57.188 Latency(us) 00:33:57.188 [2024-12-05T11:17:22.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:57.188 [2024-12-05T11:17:22.237Z] =================================================================================================================== 00:33:57.188 [2024-12-05T11:17:22.237Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1540239 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # rw=randwrite 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1540831 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1540831 /var/tmp/bperf.sock 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1540831 ']' 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:57.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:57.188 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.188 [2024-12-05 12:17:22.185762] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:33:57.188 [2024-12-05 12:17:22.185837] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1540831 ] 00:33:57.188 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:57.188 Zero copy mechanism will not be used. 00:33:57.446 [2024-12-05 12:17:22.246280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:57.446 [2024-12-05 12:17:22.275683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:57.447 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:57.447 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:33:57.447 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:57.447 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:57.705 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:57.705 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.705 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.705 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.705 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:57.705 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:57.966 nvme0n1 00:33:57.966 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:57.966 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.966 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:57.966 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.966 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:57.966 12:17:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:57.966 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:57.966 Zero copy mechanism will not be used. 00:33:57.966 Running I/O for 2 seconds... 
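[editor's note] The pass/fail check traced above (`get_transient_errcount` at `host/digest.sh@71`) pipes `bdev_get_iostat` output through `jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'` and asserts the count is positive. A minimal Python sketch of the same extraction follows; the payload shape mirrors the jq path, but the sample values other than the count 237 reported in this trace are illustrative assumptions, not captured output:

```python
import json

# Illustrative bdev_get_iostat-style payload; only the nested path and the
# count 237 are taken from the trace above, the rest is assumed for the demo.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 237
          }
        }
      }
    }
  ]
}
""")

def get_transient_errcount(iostat: dict) -> int:
    """Walk the same path as the jq filter:
    .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error"""
    return iostat["bdevs"][0]["driver_specific"]["nvme_error"] \
                 ["status_code"]["command_transient_transport_error"]

count = get_transient_errcount(sample)
# The test passes when at least one transient transport error was counted,
# i.e. the shell's (( count > 0 )) check.
assert count > 0
print(count)
```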
00:33:57.966 [2024-12-05 12:17:22.910416] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:22.910650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:22.910674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:22.916995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:22.917049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:22.917068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:22.920660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:22.920753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:22.920769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:22.925554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:22.925625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:22.925642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:22.934349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:22.934634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:22.934651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:22.940097] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:22.940404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:22.940420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:22.947434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:22.947656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:22.947672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:22.954092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:22.954379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:22.954396] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:22.960820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:22.961112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:22.961129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:22.969566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:22.969794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:22.969810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:22.979008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:22.979051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:22.979067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:22.983337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:22.983666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:22.983682] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:22.991828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:22.991916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:22.991935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:22.996255] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:22.996316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:22.996332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:23.007364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:23.007416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.966 [2024-12-05 12:17:23.007432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:57.966 [2024-12-05 12:17:23.012786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:57.966 [2024-12-05 12:17:23.013083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:57.967 [2024-12-05 12:17:23.013100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:58.228 [2024-12-05 12:17:23.023103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.228 [2024-12-05 12:17:23.023386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.228 [2024-12-05 12:17:23.023403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:58.228 [2024-12-05 12:17:23.034032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.228 [2024-12-05 12:17:23.034315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.228 [2024-12-05 12:17:23.034332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:58.228 [2024-12-05 12:17:23.045222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.228 [2024-12-05 12:17:23.045472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.228 [2024-12-05 12:17:23.045488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:58.228 [2024-12-05 12:17:23.056617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.228 [2024-12-05 12:17:23.056889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.228 [2024-12-05 12:17:23.056904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:58.228 [2024-12-05 12:17:23.067941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.229 [2024-12-05 12:17:23.068149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.229 [2024-12-05 12:17:23.068165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:58.229 [2024-12-05 12:17:23.080005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.229 [2024-12-05 12:17:23.080262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.229 [2024-12-05 12:17:23.080278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:58.229 [2024-12-05 12:17:23.091966] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.229 [2024-12-05 12:17:23.092214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.229 [2024-12-05 12:17:23.092230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:58.229 [2024-12-05 12:17:23.102968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.229 [2024-12-05 12:17:23.103214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.229 [2024-12-05 12:17:23.103229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:58.229 [2024-12-05 12:17:23.114722] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.229 [2024-12-05 12:17:23.114958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.229 [2024-12-05 12:17:23.114973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:58.229 [2024-12-05 12:17:23.125622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.229 [2024-12-05 12:17:23.125880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.229 [2024-12-05 12:17:23.125903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:58.229 [2024-12-05 12:17:23.137312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.229 [2024-12-05 12:17:23.137596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.229 [2024-12-05 12:17:23.137612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:58.229 [2024-12-05 12:17:23.149093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 
00:33:58.229 [2024-12-05 12:17:23.149360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.229 [2024-12-05 12:17:23.149374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:58.229 [2024-12-05 12:17:23.160708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.229 [2024-12-05 12:17:23.161017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.229 [2024-12-05 12:17:23.161032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:58.229 [2024-12-05 12:17:23.171431] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.229 [2024-12-05 12:17:23.171504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.229 [2024-12-05 12:17:23.171519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:58.229 [2024-12-05 12:17:23.182829] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.229 [2024-12-05 12:17:23.183082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.229 [2024-12-05 12:17:23.183097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:58.229 [2024-12-05 12:17:23.193549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.229 [2024-12-05 12:17:23.193799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.229 [2024-12-05 12:17:23.193815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.229 [2024-12-05 12:17:23.201660] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.229 [2024-12-05 12:17:23.201925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.229 [2024-12-05 12:17:23.201940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.229 [2024-12-05 12:17:23.206088] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.229 [2024-12-05 12:17:23.206133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.229 [2024-12-05 12:17:23.206148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.229 [2024-12-05 12:17:23.210075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.229 [2024-12-05 12:17:23.210120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.229 [2024-12-05 12:17:23.210135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.229 [2024-12-05 12:17:23.216750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.229 [2024-12-05 12:17:23.216808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.229 [2024-12-05 12:17:23.216824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.229 [2024-12-05 12:17:23.222604] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.229 [2024-12-05 12:17:23.222940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.229 [2024-12-05 12:17:23.222956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.229 [2024-12-05 12:17:23.229576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.229 [2024-12-05 12:17:23.229636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.229 [2024-12-05 12:17:23.229651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.229 [2024-12-05 12:17:23.235341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.229 [2024-12-05 12:17:23.235392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.229 [2024-12-05 12:17:23.235413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.229 [2024-12-05 12:17:23.243570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.229 [2024-12-05 12:17:23.243868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.229 [2024-12-05 12:17:23.243884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.229 [2024-12-05 12:17:23.251441] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.229 [2024-12-05 12:17:23.251525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.229 [2024-12-05 12:17:23.251541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.229 [2024-12-05 12:17:23.259314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.229 [2024-12-05 12:17:23.259365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.230 [2024-12-05 12:17:23.259380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.230 [2024-12-05 12:17:23.266250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.230 [2024-12-05 12:17:23.266295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.230 [2024-12-05 12:17:23.266311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.230 [2024-12-05 12:17:23.271486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.230 [2024-12-05 12:17:23.271539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.230 [2024-12-05 12:17:23.271554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.280699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.280779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.280794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.287843] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.287905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.287920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.292150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.292197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.292212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.296188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.296252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.296267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.300301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.300349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.300365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.306847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.306907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.306923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.313472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.313536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.313551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.320379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.320680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.320697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.328530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.328831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.328847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.338534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.338855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.338872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.349649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.349867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.349883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.360313] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.360518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.360534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.370335] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.370557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.370573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.380082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.380305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.380321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.391250] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.391589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.391606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.401666] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.401971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.401988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.407314] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.407506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.407522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.411646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.411837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.411852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.419683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.419998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.420016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.427084] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.427377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.427394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.430608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.430666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.430683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.434491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.434701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.434717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.438217] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.438405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.438421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.442389] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.442582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.442597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.450185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.450386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.450402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.456253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.456487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.456503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.463776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.464089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.464105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.469786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.470081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.470098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.474062] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.474373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.474389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.481622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.481927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.481944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.490108] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.492 [2024-12-05 12:17:23.490310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.492 [2024-12-05 12:17:23.490326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.492 [2024-12-05 12:17:23.499798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.493 [2024-12-05 12:17:23.500118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.493 [2024-12-05 12:17:23.500135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.493 [2024-12-05 12:17:23.509963] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.493 [2024-12-05 12:17:23.510191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.493 [2024-12-05 12:17:23.510207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.493 [2024-12-05 12:17:23.521260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.493 [2024-12-05 12:17:23.521514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.493 [2024-12-05 12:17:23.521530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.493 [2024-12-05 12:17:23.531609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.493 [2024-12-05 12:17:23.531869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.493 [2024-12-05 12:17:23.531886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.541771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.542047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.755 [2024-12-05 12:17:23.542064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.552402] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.552658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.755 [2024-12-05 12:17:23.552674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.563877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.564106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.755 [2024-12-05 12:17:23.564122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.574550] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.574834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.755 [2024-12-05 12:17:23.574851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.584352] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.584535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.755 [2024-12-05 12:17:23.584552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.587701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.587873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.755 [2024-12-05 12:17:23.587889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.590982] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.591207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.755 [2024-12-05 12:17:23.591223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.595740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.595903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.755 [2024-12-05 12:17:23.595920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.598771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.598934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.755 [2024-12-05 12:17:23.598950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.601785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.601944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.755 [2024-12-05 12:17:23.601960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.605199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.605361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.755 [2024-12-05 12:17:23.605376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.612227] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.612589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.755 [2024-12-05 12:17:23.612610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.618262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.618580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.755 [2024-12-05 12:17:23.618597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.624184] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.624232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.755 [2024-12-05 12:17:23.624248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.755 [2024-12-05 12:17:23.632526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.755 [2024-12-05 12:17:23.632605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.632620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.636018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.636063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.636078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.639481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.639527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.639542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.642783] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.642828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.642844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.646297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.646344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.646359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.649616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.649658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.649673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.652834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.652885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.652901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.656906] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.656959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.656974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.660142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.660208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.660224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.663559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.663602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.663617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.666475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.666523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.666538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.669310] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.669358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.669373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.672145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.672194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.672210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.674980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.675025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.675041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.677828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.677870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.677885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:58.756 [2024-12-05 12:17:23.680649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:58.756 [2024-12-05 12:17:23.680695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.756 [2024-12-05 12:17:23.680710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0
dnr:0 00:33:58.756 [2024-12-05 12:17:23.683395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.756 [2024-12-05 12:17:23.683444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.756 [2024-12-05 12:17:23.683465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:58.756 [2024-12-05 12:17:23.686230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.756 [2024-12-05 12:17:23.686288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.756 [2024-12-05 12:17:23.686303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:58.756 [2024-12-05 12:17:23.689055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.756 [2024-12-05 12:17:23.689106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.756 [2024-12-05 12:17:23.689121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:58.756 [2024-12-05 12:17:23.692429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.756 [2024-12-05 12:17:23.692513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.756 [2024-12-05 12:17:23.692528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:58.756 [2024-12-05 12:17:23.699275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.756 [2024-12-05 12:17:23.699416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.756 [2024-12-05 12:17:23.699432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.709141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.709484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.709499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.718526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.718595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.718610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.723463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.723537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.723555] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.727992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.728314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.728330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.731179] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.731232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.731247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.735581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.735632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.735647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.738752] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.738827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:58.757 [2024-12-05 12:17:23.738842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.743440] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.743521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.743537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.746362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.746405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.746420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.749307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.749366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.749381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.752298] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.752358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.752373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.755139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.755195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.755210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.757926] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.757995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.758010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.760934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.760978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.760993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.765262] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.765325] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.765340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.768937] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.769138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.769152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.779445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.779730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.779747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.788621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.788750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.788764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.794203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.794248] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.794263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:58.757 [2024-12-05 12:17:23.798504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:58.757 [2024-12-05 12:17:23.798548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.757 [2024-12-05 12:17:23.798563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.020 [2024-12-05 12:17:23.803920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.020 [2024-12-05 12:17:23.803996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.020 [2024-12-05 12:17:23.804012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.020 [2024-12-05 12:17:23.811089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.020 [2024-12-05 12:17:23.811172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.020 [2024-12-05 12:17:23.811187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.020 [2024-12-05 12:17:23.816434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with 
pdu=0x200016eff3c8 00:33:59.020 [2024-12-05 12:17:23.816615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.020 [2024-12-05 12:17:23.816631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.020 [2024-12-05 12:17:23.822388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.020 [2024-12-05 12:17:23.822516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.020 [2024-12-05 12:17:23.822531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.020 [2024-12-05 12:17:23.826580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.020 [2024-12-05 12:17:23.826643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.020 [2024-12-05 12:17:23.826658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.020 [2024-12-05 12:17:23.830288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.020 [2024-12-05 12:17:23.830368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.020 [2024-12-05 12:17:23.830384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.020 [2024-12-05 12:17:23.834346] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.020 [2024-12-05 12:17:23.834516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.020 [2024-12-05 12:17:23.834531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.020 [2024-12-05 12:17:23.838329] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.020 [2024-12-05 12:17:23.838395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.020 [2024-12-05 12:17:23.838411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.020 [2024-12-05 12:17:23.847641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.020 [2024-12-05 12:17:23.847828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.020 [2024-12-05 12:17:23.847846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.020 [2024-12-05 12:17:23.853009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.020 [2024-12-05 12:17:23.853090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.020 [2024-12-05 12:17:23.853105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.020 [2024-12-05 
12:17:23.856730] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.020 [2024-12-05 12:17:23.856776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.020 [2024-12-05 12:17:23.856791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.020 [2024-12-05 12:17:23.859766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.020 [2024-12-05 12:17:23.859833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.020 [2024-12-05 12:17:23.859849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.020 [2024-12-05 12:17:23.862900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.862954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.862969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.866326] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.866370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.866385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.869411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.869467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.869482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.872237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.872295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.872310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.875036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.875082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.875097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.877912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.877964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.877980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.880719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.880771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.880785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.883354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.883404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.883419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.886039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.886095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.886110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.888631] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.888684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.888698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.891200] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.891316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.891331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.894818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.894911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.894926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.900603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.900821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.900836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.904866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.904927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:59.021 [2024-12-05 12:17:23.904942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.907436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.907510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.907526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.021 4969.00 IOPS, 621.12 MiB/s [2024-12-05T11:17:24.070Z] [2024-12-05 12:17:23.914500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.914562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.914577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.918324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.918427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.918442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.926709] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.927018] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.927033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.936165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.936430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.936446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.943900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.943972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.943988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.021 [2024-12-05 12:17:23.947032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.021 [2024-12-05 12:17:23.947102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.021 [2024-12-05 12:17:23.947118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:23.949896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 
00:33:59.022 [2024-12-05 12:17:23.949967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.949981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:23.952747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:23.952802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.952820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:23.955549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:23.955620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.955636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:23.958317] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:23.958372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.958387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:23.961121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:23.961181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.961196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:23.963740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:23.963801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.963816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:23.967877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:23.967955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.967970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:23.974099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:23.974168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.974183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:23.977240] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:23.977295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.977310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:23.982719] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:23.983029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.983045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:23.987031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:23.987120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.987135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:23.989874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:23.989929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.989944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:33:59.022 [2024-12-05 12:17:23.992744] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:23.992855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.992870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:23.996180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:23.996269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.996284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:23.998868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:23.998929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:23.998944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:24.001683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:24.001738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:24.001754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:24.006229] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:24.006284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:24.006299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:24.010439] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:24.010501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:24.010517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:24.013309] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:24.013364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:24.013380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:24.015958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.022 [2024-12-05 12:17:24.016028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.022 [2024-12-05 12:17:24.016043] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.022 [2024-12-05 12:17:24.018556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.023 [2024-12-05 12:17:24.018614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.023 [2024-12-05 12:17:24.018629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.023 [2024-12-05 12:17:24.021168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.023 [2024-12-05 12:17:24.021225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.023 [2024-12-05 12:17:24.021240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.023 [2024-12-05 12:17:24.025678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.023 [2024-12-05 12:17:24.025791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.023 [2024-12-05 12:17:24.025806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.023 [2024-12-05 12:17:24.033103] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.023 [2024-12-05 12:17:24.033326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:59.023 [2024-12-05 12:17:24.033342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.023 [2024-12-05 12:17:24.043020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.023 [2024-12-05 12:17:24.043263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.023 [2024-12-05 12:17:24.043279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.023 [2024-12-05 12:17:24.052357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.023 [2024-12-05 12:17:24.052419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.023 [2024-12-05 12:17:24.052435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.023 [2024-12-05 12:17:24.062590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.023 [2024-12-05 12:17:24.062868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.023 [2024-12-05 12:17:24.062884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.072523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.072821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.072841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.082289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.082580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.082596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.092214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.092448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.092468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.101421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.101482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.101498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.104376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.104421] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.104435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.107315] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.107365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.107380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.110197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.110270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.110284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.112874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.112919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.112935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.115598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.115642] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.115657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.118443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.118497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.118511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.121111] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.121156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.121171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.123703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.123761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.123776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.126272] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with 
pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.126316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.126331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.128837] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.128889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.128904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.131473] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.131526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.131541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.134873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.134955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.134970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.142884] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.143166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.143182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.152101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.152419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.152435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.162671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.162927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.162943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 12:17:24.172395] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.172469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.172484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.286 [2024-12-05 
12:17:24.183150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.286 [2024-12-05 12:17:24.183381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.286 [2024-12-05 12:17:24.183396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.192699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.192755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.287 [2024-12-05 12:17:24.192770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.196359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.196402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.287 [2024-12-05 12:17:24.196417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.202518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.202694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.287 [2024-12-05 12:17:24.202710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.210167] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.210465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.287 [2024-12-05 12:17:24.210481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.220480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.220549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.287 [2024-12-05 12:17:24.220563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.231041] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.231338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.287 [2024-12-05 12:17:24.231356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.241749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.242020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.287 [2024-12-05 12:17:24.242036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.251995] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.252253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.287 [2024-12-05 12:17:24.252268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.262514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.262664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.287 [2024-12-05 12:17:24.262679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.273429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.273713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.287 [2024-12-05 12:17:24.273729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.284268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.284438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.287 [2024-12-05 12:17:24.284458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.294639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.294890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.287 [2024-12-05 12:17:24.294905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.305523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.305762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.287 [2024-12-05 12:17:24.305778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.316139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.316212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.287 [2024-12-05 12:17:24.316227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.287 [2024-12-05 12:17:24.326817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.287 [2024-12-05 12:17:24.327096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:59.287 [2024-12-05 12:17:24.327112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.337187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.337431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.337446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.347617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.347828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.347842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.358074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.358321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.358336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.368976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.369072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.369087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.379422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.379676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.379691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.388662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.388906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.388921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.398139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.398384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.398400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.407956] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.408195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.408210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.418498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.418754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.418769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.429342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.429414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.429430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.437978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.438044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.438060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.446707] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.446943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.446959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.454813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.455063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.455079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.463621] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.463681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.463697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.471587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.471642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.471657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.476680] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.550 [2024-12-05 12:17:24.476722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.550 [2024-12-05 12:17:24.476738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.550 [2024-12-05 12:17:24.483461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.483741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.483759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.490180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.490238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.490253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.493598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.493643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.493658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.497074] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.497153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.497168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.500510] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.500554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.500569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.504107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.504148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.504162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.511301] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.511597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.511619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.517708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.517769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.517784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.524910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.525180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.525195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.532090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.532251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.532266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.540382] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.540448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.540469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.548159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.548214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.548229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.553703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.554026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.554042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.559924] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.559986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.560002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.564429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.564712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.564728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.571235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.571343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.571358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.576241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.576313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.576328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.579704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.579764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.579779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.582733] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.582778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.582793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.585505] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.585583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.585599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.551 [2024-12-05 12:17:24.588222] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.551 [2024-12-05 12:17:24.588294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.551 [2024-12-05 12:17:24.588309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.552 [2024-12-05 12:17:24.590936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.552 [2024-12-05 12:17:24.590990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.552 [2024-12-05 12:17:24.591005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.552 [2024-12-05 12:17:24.593831] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.552 [2024-12-05 12:17:24.593884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.552 [2024-12-05 12:17:24.593900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.552 [2024-12-05 12:17:24.596561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.552 [2024-12-05 12:17:24.596675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.552 [2024-12-05 12:17:24.596690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.599187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.599238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.838 [2024-12-05 12:17:24.599254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.602428] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.602486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.838 [2024-12-05 12:17:24.602502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.605436] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.605507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.838 [2024-12-05 12:17:24.605524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.607993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.608039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.838 [2024-12-05 12:17:24.608054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.611354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.611770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.838 [2024-12-05 12:17:24.611786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.618070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.618134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.838 [2024-12-05 12:17:24.618149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.620850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.620895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.838 [2024-12-05 12:17:24.620910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.623681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.623770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.838 [2024-12-05 12:17:24.623785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.626944] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.627004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.838 [2024-12-05 12:17:24.627019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.629507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.629573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.838 [2024-12-05 12:17:24.629588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.634685] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.634949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.838 [2024-12-05 12:17:24.634965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.641330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.641459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.838 [2024-12-05 12:17:24.641474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.650279] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.650559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.838 [2024-12-05 12:17:24.650575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.658872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.659130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.838 [2024-12-05 12:17:24.659146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.838 [2024-12-05 12:17:24.666516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.838 [2024-12-05 12:17:24.666598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.666613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.675854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.675912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.675927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.683555] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.683614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.683629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.692475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.692662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.692677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.701681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.701743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.701758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.709619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.709869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.709884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.715048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.715102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.715117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.718553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.718596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.718611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.721847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.721926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.721942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.725484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.725531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.725546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.731020] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.731201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.731217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.739883] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.740166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.740182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.744251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.744314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.744329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.747787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.747842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.747858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.753943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.754074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.754091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.761699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.761994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.762010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.766126] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.766171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.766186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.772579] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.772853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.772869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.781187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.781243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.781259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.788519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.788786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.788801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.795327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.795372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.795387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.839 [2024-12-05 12:17:24.798948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.839 [2024-12-05 12:17:24.799018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.839 [2024-12-05 12:17:24.799034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.840 [2024-12-05 12:17:24.803681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.840 [2024-12-05 12:17:24.803877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.840 [2024-12-05 12:17:24.803893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.840 [2024-12-05 12:17:24.809932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.840 [2024-12-05 12:17:24.810013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.840 [2024-12-05 12:17:24.810028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:33:59.840 [2024-12-05 12:17:24.816595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.840 [2024-12-05 12:17:24.816842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.840 [2024-12-05 12:17:24.816856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:33:59.840 [2024-12-05 12:17:24.825056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.840 [2024-12-05 12:17:24.825357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.840 [2024-12-05 12:17:24.825373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:33:59.840 [2024-12-05 12:17:24.829312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.840 [2024-12-05 12:17:24.829514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:59.840 [2024-12-05 12:17:24.829529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:33:59.840 [2024-12-05 12:17:24.838762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8
00:33:59.840 [2024-12-05 12:17:24.838837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.840 [2024-12-05 12:17:24.838852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.840 [2024-12-05 12:17:24.848149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.840 [2024-12-05 12:17:24.848360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.840 [2024-12-05 12:17:24.848376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.840 [2024-12-05 12:17:24.852597] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.840 [2024-12-05 12:17:24.852664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.840 [2024-12-05 12:17:24.852679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.840 [2024-12-05 12:17:24.858692] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.840 [2024-12-05 12:17:24.858893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.840 [2024-12-05 12:17:24.858908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:33:59.840 [2024-12-05 12:17:24.865706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.840 [2024-12-05 12:17:24.865839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.840 [2024-12-05 12:17:24.865854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:33:59.840 [2024-12-05 12:17:24.873626] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.840 [2024-12-05 12:17:24.873856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.840 [2024-12-05 12:17:24.873872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:59.840 [2024-12-05 12:17:24.881056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.840 [2024-12-05 12:17:24.881348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.840 [2024-12-05 12:17:24.881364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:59.840 [2024-12-05 12:17:24.885144] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:33:59.840 [2024-12-05 12:17:24.885239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:59.840 [2024-12-05 12:17:24.885254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:00.101 [2024-12-05 12:17:24.892379] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:34:00.101 [2024-12-05 12:17:24.892621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.101 [2024-12-05 12:17:24.892637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:00.101 [2024-12-05 12:17:24.900387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:34:00.101 [2024-12-05 12:17:24.900519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.101 [2024-12-05 12:17:24.900535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:00.101 [2024-12-05 12:17:24.909396] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x105f9f0) with pdu=0x200016eff3c8 00:34:00.101 [2024-12-05 12:17:24.909574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:00.101 [2024-12-05 12:17:24.909589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:00.101 4977.00 IOPS, 622.12 MiB/s 00:34:00.102 Latency(us) 00:34:00.102 [2024-12-05T11:17:25.151Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.102 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:34:00.102 nvme0n1 : 2.01 4971.00 621.38 0.00 0.00 3212.42 1208.32 11960.32 00:34:00.102 [2024-12-05T11:17:25.151Z] =================================================================================================================== 
00:34:00.102 [2024-12-05T11:17:25.151Z] Total : 4971.00 621.38 0.00 0.00 3212.42 1208.32 11960.32 00:34:00.102 { 00:34:00.102 "results": [ 00:34:00.102 { 00:34:00.102 "job": "nvme0n1", 00:34:00.102 "core_mask": "0x2", 00:34:00.102 "workload": "randwrite", 00:34:00.102 "status": "finished", 00:34:00.102 "queue_depth": 16, 00:34:00.102 "io_size": 131072, 00:34:00.102 "runtime": 2.005632, 00:34:00.102 "iops": 4971.001659327334, 00:34:00.102 "mibps": 621.3752074159167, 00:34:00.102 "io_failed": 0, 00:34:00.102 "io_timeout": 0, 00:34:00.102 "avg_latency_us": 3212.4201511200267, 00:34:00.102 "min_latency_us": 1208.32, 00:34:00.102 "max_latency_us": 11960.32 00:34:00.102 } 00:34:00.102 ], 00:34:00.102 "core_count": 1 00:34:00.102 } 00:34:00.102 12:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:34:00.102 12:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:34:00.102 12:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:34:00.102 | .driver_specific 00:34:00.102 | .nvme_error 00:34:00.102 | .status_code 00:34:00.102 | .command_transient_transport_error' 00:34:00.102 12:17:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:34:00.102 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 322 > 0 )) 00:34:00.102 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1540831 00:34:00.102 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1540831 ']' 00:34:00.102 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1540831 00:34:00.102 12:17:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:34:00.102 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:00.102 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1540831 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1540831' 00:34:00.362 killing process with pid 1540831 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1540831 00:34:00.362 Received shutdown signal, test time was about 2.000000 seconds 00:34:00.362 00:34:00.362 Latency(us) 00:34:00.362 [2024-12-05T11:17:25.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:00.362 [2024-12-05T11:17:25.411Z] =================================================================================================================== 00:34:00.362 [2024-12-05T11:17:25.411Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1540831 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1538525 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1538525 ']' 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1538525 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # uname 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1538525 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1538525' 00:34:00.362 killing process with pid 1538525 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1538525 00:34:00.362 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1538525 00:34:00.622 00:34:00.622 real 0m14.645s 00:34:00.622 user 0m28.638s 00:34:00.622 sys 0m3.518s 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:34:00.622 ************************************ 00:34:00.622 END TEST nvmf_digest_error 00:34:00.622 ************************************ 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@99 -- # sync 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:00.622 12:17:25 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@102 -- # set +e 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:00.622 rmmod nvme_tcp 00:34:00.622 rmmod nvme_fabrics 00:34:00.622 rmmod nvme_keyring 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@106 -- # set -e 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@107 -- # return 0 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # '[' -n 1538525 ']' 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@337 -- # killprocess 1538525 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1538525 ']' 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1538525 00:34:00.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1538525) - No such process 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1538525 is not found' 00:34:00.622 Process with pid 1538525 is not found 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # nvmf_fini 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@254 -- # local dev 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@257 -- # remove_target_ns 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> 
/dev/null' 00:34:00.622 12:17:25 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:03.164 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@258 -- # delete_main_bridge 00:34:03.164 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:03.164 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # return 0 00:34:03.164 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:03.164 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:34:03.165 12:17:27 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # _dev=0 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # dev_map=() 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@274 -- # iptr 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-save 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-restore 00:34:03.165 00:34:03.165 real 0m41.105s 00:34:03.165 user 1m3.064s 00:34:03.165 sys 0m13.021s 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:34:03.165 ************************************ 00:34:03.165 END TEST nvmf_digest 00:34:03.165 ************************************ 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:03.165 ************************************ 00:34:03.165 START TEST nvmf_bdevperf 00:34:03.165 
************************************ 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:34:03.165 * Looking for test storage... 00:34:03.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 
00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:03.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.165 --rc genhtml_branch_coverage=1 00:34:03.165 
--rc genhtml_function_coverage=1 00:34:03.165 --rc genhtml_legend=1 00:34:03.165 --rc geninfo_all_blocks=1 00:34:03.165 --rc geninfo_unexecuted_blocks=1 00:34:03.165 00:34:03.165 ' 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:03.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.165 --rc genhtml_branch_coverage=1 00:34:03.165 --rc genhtml_function_coverage=1 00:34:03.165 --rc genhtml_legend=1 00:34:03.165 --rc geninfo_all_blocks=1 00:34:03.165 --rc geninfo_unexecuted_blocks=1 00:34:03.165 00:34:03.165 ' 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:03.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.165 --rc genhtml_branch_coverage=1 00:34:03.165 --rc genhtml_function_coverage=1 00:34:03.165 --rc genhtml_legend=1 00:34:03.165 --rc geninfo_all_blocks=1 00:34:03.165 --rc geninfo_unexecuted_blocks=1 00:34:03.165 00:34:03.165 ' 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:03.165 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:03.165 --rc genhtml_branch_coverage=1 00:34:03.165 --rc genhtml_function_coverage=1 00:34:03.165 --rc genhtml_legend=1 00:34:03.165 --rc geninfo_all_blocks=1 00:34:03.165 --rc geninfo_unexecuted_blocks=1 00:34:03.165 00:34:03.165 ' 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:03.165 
12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:03.165 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@50 -- # : 0 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:34:03.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : 
integer expression expected 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # remove_target_ns 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # xtrace_disable 00:34:03.166 12:17:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # pci_devs=() 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # local -a pci_devs 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # pci_drivers=() 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # net_devs=() 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # local -ga net_devs 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # e810=() 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # local -ga e810 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # x722=() 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # local -ga x722 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # mlx=() 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # local -ga mlx 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@148 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:11.306 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:11.306 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:34:11.306 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:11.306 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:11.307 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # is_hw=yes 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@264 -- # [[ yes == yes ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@247 -- # create_target_ns 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:11.307 12:17:35 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@28 -- # local -g _dev 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # ips=() 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:34:11.307 12:17:35 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772161 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:34:11.307 10.0.0.1 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # 
val_to_ip 167772162 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772162 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:34:11.307 10.0.0.2 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@38 -- # ping_ips 1 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:11.307 
12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:11.307 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # ip netns 
exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:34:11.308 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:11.308 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.693 ms 00:34:11.308 00:34:11.308 --- 10.0.0.1 ping statistics --- 00:34:11.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.308 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=target0 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:34:11.308 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:11.308 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:34:11.308 00:34:11.308 --- 10.0.0.2 ping statistics --- 00:34:11.308 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:11.308 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # return 0 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 
00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # return 1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev= 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@160 -- # return 0 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=target0 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:34:11.308 12:17:35 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=target1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # return 1 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev= 00:34:11.308 12:17:35 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@160 -- # return 0 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:34:11.308 ' 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=1545634 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 1545634 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@835 -- # '[' -z 1545634 ']' 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:11.308 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:11.309 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:11.309 12:17:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.309 [2024-12-05 12:17:35.605843] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:34:11.309 [2024-12-05 12:17:35.605903] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.309 [2024-12-05 12:17:35.703907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:11.309 [2024-12-05 12:17:35.755995] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.309 [2024-12-05 12:17:35.756044] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.309 [2024-12-05 12:17:35.756053] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.309 [2024-12-05 12:17:35.756060] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:11.309 [2024-12-05 12:17:35.756066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:11.309 [2024-12-05 12:17:35.757941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:11.309 [2024-12-05 12:17:35.758109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:11.309 [2024-12-05 12:17:35.758110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.569 [2024-12-05 12:17:36.486508] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.569 Malloc0 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.569 [2024-12-05 12:17:36.564824] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:34:11.569 
12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:11.569 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:11.569 { 00:34:11.570 "params": { 00:34:11.570 "name": "Nvme$subsystem", 00:34:11.570 "trtype": "$TEST_TRANSPORT", 00:34:11.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:11.570 "adrfam": "ipv4", 00:34:11.570 "trsvcid": "$NVMF_PORT", 00:34:11.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:11.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:11.570 "hdgst": ${hdgst:-false}, 00:34:11.570 "ddgst": ${ddgst:-false} 00:34:11.570 }, 00:34:11.570 "method": "bdev_nvme_attach_controller" 00:34:11.570 } 00:34:11.570 EOF 00:34:11.570 )") 00:34:11.570 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:34:11.570 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 00:34:11.570 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:34:11.570 12:17:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:34:11.570 "params": { 00:34:11.570 "name": "Nvme1", 00:34:11.570 "trtype": "tcp", 00:34:11.570 "traddr": "10.0.0.2", 00:34:11.570 "adrfam": "ipv4", 00:34:11.570 "trsvcid": "4420", 00:34:11.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:11.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:11.570 "hdgst": false, 00:34:11.570 "ddgst": false 00:34:11.570 }, 00:34:11.570 "method": "bdev_nvme_attach_controller" 00:34:11.570 }' 00:34:11.830 [2024-12-05 12:17:36.635435] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:34:11.830 [2024-12-05 12:17:36.635518] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1545921 ] 00:34:11.830 [2024-12-05 12:17:36.727198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:11.830 [2024-12-05 12:17:36.779656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.090 Running I/O for 1 seconds... 00:34:13.030 8594.00 IOPS, 33.57 MiB/s 00:34:13.030 Latency(us) 00:34:13.030 [2024-12-05T11:17:38.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:13.030 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:13.030 Verification LBA range: start 0x0 length 0x4000 00:34:13.030 Nvme1n1 : 1.01 8616.60 33.66 0.00 0.00 14789.27 3345.07 13271.04 00:34:13.030 [2024-12-05T11:17:38.079Z] =================================================================================================================== 00:34:13.030 [2024-12-05T11:17:38.079Z] Total : 8616.60 33.66 0.00 0.00 14789.27 3345.07 13271.04 00:34:13.289 12:17:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1546165 00:34:13.289 12:17:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:34:13.289 12:17:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:34:13.289 12:17:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:34:13.289 12:17:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:34:13.289 12:17:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:34:13.289 12:17:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for 
subsystem in "${@:-1}" 00:34:13.289 12:17:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:13.289 { 00:34:13.289 "params": { 00:34:13.289 "name": "Nvme$subsystem", 00:34:13.289 "trtype": "$TEST_TRANSPORT", 00:34:13.289 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:13.289 "adrfam": "ipv4", 00:34:13.289 "trsvcid": "$NVMF_PORT", 00:34:13.289 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:13.289 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:13.290 "hdgst": ${hdgst:-false}, 00:34:13.290 "ddgst": ${ddgst:-false} 00:34:13.290 }, 00:34:13.290 "method": "bdev_nvme_attach_controller" 00:34:13.290 } 00:34:13.290 EOF 00:34:13.290 )") 00:34:13.290 12:17:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:34:13.290 12:17:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 00:34:13.290 12:17:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:34:13.290 12:17:38 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:34:13.290 "params": { 00:34:13.290 "name": "Nvme1", 00:34:13.290 "trtype": "tcp", 00:34:13.290 "traddr": "10.0.0.2", 00:34:13.290 "adrfam": "ipv4", 00:34:13.290 "trsvcid": "4420", 00:34:13.290 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:13.290 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:13.290 "hdgst": false, 00:34:13.290 "ddgst": false 00:34:13.290 }, 00:34:13.290 "method": "bdev_nvme_attach_controller" 00:34:13.290 }' 00:34:13.290 [2024-12-05 12:17:38.173421] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:34:13.290 [2024-12-05 12:17:38.173483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1546165 ] 00:34:13.290 [2024-12-05 12:17:38.260988] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:13.290 [2024-12-05 12:17:38.296669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:13.549 Running I/O for 15 seconds... 00:34:15.870 10621.00 IOPS, 41.49 MiB/s [2024-12-05T11:17:41.182Z] 10887.00 IOPS, 42.53 MiB/s [2024-12-05T11:17:41.182Z] 12:17:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1545634 00:34:16.133 12:17:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:16.133 [2024-12-05 12:17:41.136448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:93120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:32 nsid:1 lba:93144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:93152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:93168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:34:16.133 [2024-12-05 12:17:41.136667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:93192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:93216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:93224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:93232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136779] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:93248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 
lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:93304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 
12:17:41.136980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:93328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.136988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.136997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.137005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.137014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.133 [2024-12-05 12:17:41.137021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.133 [2024-12-05 12:17:41.137030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:93352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:93368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137071] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:93392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:93408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:93424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:93464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:93472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:93512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:93520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:93528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:93544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 
[2024-12-05 12:17:41.137544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137638] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:93608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:93640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:93648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:93664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.134 [2024-12-05 12:17:41.137784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.134 [2024-12-05 12:17:41.137794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:93672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.135 [2024-12-05 12:17:41.137801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.137811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.137819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.137828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.137836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.137845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:92680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.137852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.137862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.137869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.137879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.137886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.137896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.137903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.137913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.137921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.137931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:92720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.137938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.137948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.137955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.137965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:92736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.137972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.137981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:92744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.137988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.137998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:92760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 
[2024-12-05 12:17:41.138032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:92768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:92776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:92784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:16.135 [2024-12-05 12:17:41.138089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:92792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:92800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:92808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:92816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:92832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:92840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 
lba:92848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:92856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:92872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:92880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 
[2024-12-05 12:17:41.138321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:92896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:92904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:92912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:92920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:92928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:92936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138413] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:92952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.135 [2024-12-05 12:17:41.138467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.135 [2024-12-05 12:17:41.138477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:92968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:92976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 
lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:92992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:93016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:93024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 
[2024-12-05 12:17:41.138616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:93064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:93088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:16.136 [2024-12-05 12:17:41.138775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.138784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2170610 is same with the state(6) to be set 00:34:16.136 [2024-12-05 12:17:41.138794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:34:16.136 [2024-12-05 12:17:41.138800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:34:16.136 [2024-12-05 12:17:41.138806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93112 len:8 PRP1 0x0 PRP2 0x0 00:34:16.136 [2024-12-05 12:17:41.138815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.136 [2024-12-05 12:17:41.142418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.136 [2024-12-05 12:17:41.142473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.136 [2024-12-05 12:17:41.143159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.136 [2024-12-05 12:17:41.143176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.136 [2024-12-05 12:17:41.143184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.136 [2024-12-05 12:17:41.143407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.136 [2024-12-05 12:17:41.143638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.136 [2024-12-05 12:17:41.143647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.136 [2024-12-05 12:17:41.143657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.136 [2024-12-05 12:17:41.143665] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.136 [2024-12-05 12:17:41.156522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.136 [2024-12-05 12:17:41.157180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.136 [2024-12-05 12:17:41.157218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.136 [2024-12-05 12:17:41.157229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.136 [2024-12-05 12:17:41.157481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.136 [2024-12-05 12:17:41.157708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.136 [2024-12-05 12:17:41.157717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.136 [2024-12-05 12:17:41.157726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.136 [2024-12-05 12:17:41.157735] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.136 [2024-12-05 12:17:41.170396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.136 [2024-12-05 12:17:41.171078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.136 [2024-12-05 12:17:41.171117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.136 [2024-12-05 12:17:41.171128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.136 [2024-12-05 12:17:41.171369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.136 [2024-12-05 12:17:41.171611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.136 [2024-12-05 12:17:41.171621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.136 [2024-12-05 12:17:41.171629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.136 [2024-12-05 12:17:41.171637] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.399 [2024-12-05 12:17:41.184309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.399 [2024-12-05 12:17:41.184972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.399 [2024-12-05 12:17:41.185012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.399 [2024-12-05 12:17:41.185024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.399 [2024-12-05 12:17:41.185266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.399 [2024-12-05 12:17:41.185502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.399 [2024-12-05 12:17:41.185513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.399 [2024-12-05 12:17:41.185521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.399 [2024-12-05 12:17:41.185529] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.399 [2024-12-05 12:17:41.198239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.399 [2024-12-05 12:17:41.198771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.399 [2024-12-05 12:17:41.198810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.399 [2024-12-05 12:17:41.198822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.399 [2024-12-05 12:17:41.199064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.399 [2024-12-05 12:17:41.199289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.399 [2024-12-05 12:17:41.199298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.399 [2024-12-05 12:17:41.199307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.399 [2024-12-05 12:17:41.199315] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.399 [2024-12-05 12:17:41.212188] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.399 [2024-12-05 12:17:41.212793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.399 [2024-12-05 12:17:41.212834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.399 [2024-12-05 12:17:41.212847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.399 [2024-12-05 12:17:41.213091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.399 [2024-12-05 12:17:41.213316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.399 [2024-12-05 12:17:41.213326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.399 [2024-12-05 12:17:41.213339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.399 [2024-12-05 12:17:41.213347] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.399 [2024-12-05 12:17:41.226221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.399 [2024-12-05 12:17:41.226931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.399 [2024-12-05 12:17:41.226972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.399 [2024-12-05 12:17:41.226984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.399 [2024-12-05 12:17:41.227227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.399 [2024-12-05 12:17:41.227452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.399 [2024-12-05 12:17:41.227471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.399 [2024-12-05 12:17:41.227480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.399 [2024-12-05 12:17:41.227488] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.399 [2024-12-05 12:17:41.240153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.399 [2024-12-05 12:17:41.240819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.399 [2024-12-05 12:17:41.240863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.399 [2024-12-05 12:17:41.240876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.399 [2024-12-05 12:17:41.241121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.399 [2024-12-05 12:17:41.241347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.400 [2024-12-05 12:17:41.241358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.400 [2024-12-05 12:17:41.241366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.400 [2024-12-05 12:17:41.241374] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.400 [2024-12-05 12:17:41.254040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.400 [2024-12-05 12:17:41.254778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.400 [2024-12-05 12:17:41.254824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.400 [2024-12-05 12:17:41.254835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.400 [2024-12-05 12:17:41.255081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.400 [2024-12-05 12:17:41.255307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.400 [2024-12-05 12:17:41.255316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.400 [2024-12-05 12:17:41.255324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.400 [2024-12-05 12:17:41.255333] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.400 [2024-12-05 12:17:41.268039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.400 [2024-12-05 12:17:41.268736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.400 [2024-12-05 12:17:41.268784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.400 [2024-12-05 12:17:41.268796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.400 [2024-12-05 12:17:41.269043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.400 [2024-12-05 12:17:41.269269] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.400 [2024-12-05 12:17:41.269279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.400 [2024-12-05 12:17:41.269287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.400 [2024-12-05 12:17:41.269296] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.400 [2024-12-05 12:17:41.282013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.400 [2024-12-05 12:17:41.282735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.400 [2024-12-05 12:17:41.282784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.400 [2024-12-05 12:17:41.282795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.400 [2024-12-05 12:17:41.283044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.400 [2024-12-05 12:17:41.283271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.400 [2024-12-05 12:17:41.283281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.400 [2024-12-05 12:17:41.283289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.400 [2024-12-05 12:17:41.283298] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.400 [2024-12-05 12:17:41.295979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.400 [2024-12-05 12:17:41.296592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.400 [2024-12-05 12:17:41.296620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.400 [2024-12-05 12:17:41.296628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.400 [2024-12-05 12:17:41.296853] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.400 [2024-12-05 12:17:41.297076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.400 [2024-12-05 12:17:41.297085] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.400 [2024-12-05 12:17:41.297093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.400 [2024-12-05 12:17:41.297101] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.400 [2024-12-05 12:17:41.310007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.400 [2024-12-05 12:17:41.310614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.400 [2024-12-05 12:17:41.310639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.400 [2024-12-05 12:17:41.310655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.400 [2024-12-05 12:17:41.310878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.400 [2024-12-05 12:17:41.311101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.400 [2024-12-05 12:17:41.311110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.400 [2024-12-05 12:17:41.311118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.400 [2024-12-05 12:17:41.311125] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.400 [2024-12-05 12:17:41.324042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.400 [2024-12-05 12:17:41.324733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.400 [2024-12-05 12:17:41.324795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.400 [2024-12-05 12:17:41.324808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.400 [2024-12-05 12:17:41.325066] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.400 [2024-12-05 12:17:41.325296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.400 [2024-12-05 12:17:41.325307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.400 [2024-12-05 12:17:41.325315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.400 [2024-12-05 12:17:41.325324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.400 [2024-12-05 12:17:41.338066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.400 [2024-12-05 12:17:41.338763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.400 [2024-12-05 12:17:41.338826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.400 [2024-12-05 12:17:41.338838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.400 [2024-12-05 12:17:41.339096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.400 [2024-12-05 12:17:41.339325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.400 [2024-12-05 12:17:41.339335] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.400 [2024-12-05 12:17:41.339343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.400 [2024-12-05 12:17:41.339353] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.400 [2024-12-05 12:17:41.352133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.400 [2024-12-05 12:17:41.352868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.400 [2024-12-05 12:17:41.352931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.400 [2024-12-05 12:17:41.352944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.400 [2024-12-05 12:17:41.353202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.400 [2024-12-05 12:17:41.353438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.400 [2024-12-05 12:17:41.353449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.400 [2024-12-05 12:17:41.353473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.401 [2024-12-05 12:17:41.353482] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.401 [2024-12-05 12:17:41.366020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.401 [2024-12-05 12:17:41.366793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.401 [2024-12-05 12:17:41.366854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.401 [2024-12-05 12:17:41.366867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.401 [2024-12-05 12:17:41.367125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.401 [2024-12-05 12:17:41.367355] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.401 [2024-12-05 12:17:41.367365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.401 [2024-12-05 12:17:41.367375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.401 [2024-12-05 12:17:41.367384] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.401 [2024-12-05 12:17:41.380028] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.401 [2024-12-05 12:17:41.380767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.401 [2024-12-05 12:17:41.380829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.401 [2024-12-05 12:17:41.380842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.401 [2024-12-05 12:17:41.381100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.401 [2024-12-05 12:17:41.381329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.401 [2024-12-05 12:17:41.381341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.401 [2024-12-05 12:17:41.381349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.401 [2024-12-05 12:17:41.381359] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.401 [2024-12-05 12:17:41.394074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.401 [2024-12-05 12:17:41.394668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.401 [2024-12-05 12:17:41.394731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.401 [2024-12-05 12:17:41.394747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.401 [2024-12-05 12:17:41.395006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.401 [2024-12-05 12:17:41.395236] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.401 [2024-12-05 12:17:41.395247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.401 [2024-12-05 12:17:41.395262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.401 [2024-12-05 12:17:41.395273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.401 [2024-12-05 12:17:41.408003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.401 [2024-12-05 12:17:41.408544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.401 [2024-12-05 12:17:41.408574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.401 [2024-12-05 12:17:41.408583] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.401 [2024-12-05 12:17:41.408809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.401 [2024-12-05 12:17:41.409032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.401 [2024-12-05 12:17:41.409043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.401 [2024-12-05 12:17:41.409050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.401 [2024-12-05 12:17:41.409058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.401 [2024-12-05 12:17:41.422026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.401 [2024-12-05 12:17:41.422605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.401 [2024-12-05 12:17:41.422633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.401 [2024-12-05 12:17:41.422642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.401 [2024-12-05 12:17:41.422868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.401 [2024-12-05 12:17:41.423091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.401 [2024-12-05 12:17:41.423102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.401 [2024-12-05 12:17:41.423113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.401 [2024-12-05 12:17:41.423122] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.401 [2024-12-05 12:17:41.436076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.401 [2024-12-05 12:17:41.436656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.401 [2024-12-05 12:17:41.436681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.401 [2024-12-05 12:17:41.436690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.401 [2024-12-05 12:17:41.436913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.401 [2024-12-05 12:17:41.437137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.401 [2024-12-05 12:17:41.437149] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.401 [2024-12-05 12:17:41.437158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.401 [2024-12-05 12:17:41.437166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.664 [2024-12-05 12:17:41.450022] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.664 [2024-12-05 12:17:41.450742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.664 [2024-12-05 12:17:41.450805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.664 [2024-12-05 12:17:41.450820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.664 [2024-12-05 12:17:41.451078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.664 [2024-12-05 12:17:41.451308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.664 [2024-12-05 12:17:41.451319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.664 [2024-12-05 12:17:41.451327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.664 [2024-12-05 12:17:41.451337] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.664 [2024-12-05 12:17:41.464046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.664 [2024-12-05 12:17:41.464556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.664 [2024-12-05 12:17:41.464585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.664 [2024-12-05 12:17:41.464594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.664 [2024-12-05 12:17:41.464818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.664 [2024-12-05 12:17:41.465055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.664 [2024-12-05 12:17:41.465065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.664 [2024-12-05 12:17:41.465073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.664 [2024-12-05 12:17:41.465081] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.664 [2024-12-05 12:17:41.477983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.664 [2024-12-05 12:17:41.478574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.664 [2024-12-05 12:17:41.478601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.664 [2024-12-05 12:17:41.478610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.664 [2024-12-05 12:17:41.478833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.664 [2024-12-05 12:17:41.479056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.664 [2024-12-05 12:17:41.479066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.664 [2024-12-05 12:17:41.479073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.664 [2024-12-05 12:17:41.479080] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.664 [2024-12-05 12:17:41.492042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.664 [2024-12-05 12:17:41.492750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.664 [2024-12-05 12:17:41.492811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.664 [2024-12-05 12:17:41.492838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.664 [2024-12-05 12:17:41.493096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.664 [2024-12-05 12:17:41.493324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.664 [2024-12-05 12:17:41.493336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.664 [2024-12-05 12:17:41.493345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.664 [2024-12-05 12:17:41.493354] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.664 [2024-12-05 12:17:41.505901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.664 [2024-12-05 12:17:41.506578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.664 [2024-12-05 12:17:41.506651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.664 [2024-12-05 12:17:41.506666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.664 [2024-12-05 12:17:41.506923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.664 [2024-12-05 12:17:41.507151] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.664 [2024-12-05 12:17:41.507163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.664 [2024-12-05 12:17:41.507171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.664 [2024-12-05 12:17:41.507180] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.664 [2024-12-05 12:17:41.519934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.664 [2024-12-05 12:17:41.520591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.664 [2024-12-05 12:17:41.520654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.664 [2024-12-05 12:17:41.520668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.664 [2024-12-05 12:17:41.520927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.664 [2024-12-05 12:17:41.521156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.664 [2024-12-05 12:17:41.521166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.664 [2024-12-05 12:17:41.521174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.664 [2024-12-05 12:17:41.521183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.664 [2024-12-05 12:17:41.533884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.665 [2024-12-05 12:17:41.534482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.665 [2024-12-05 12:17:41.534512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.665 [2024-12-05 12:17:41.534521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.665 [2024-12-05 12:17:41.534747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.665 [2024-12-05 12:17:41.534979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.665 [2024-12-05 12:17:41.534988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.665 [2024-12-05 12:17:41.534996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.665 [2024-12-05 12:17:41.535003] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.665 [2024-12-05 12:17:41.547910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.665 [2024-12-05 12:17:41.548569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.665 [2024-12-05 12:17:41.548631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.665 [2024-12-05 12:17:41.548645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.665 [2024-12-05 12:17:41.548903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.665 [2024-12-05 12:17:41.549132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.665 [2024-12-05 12:17:41.549144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.665 [2024-12-05 12:17:41.549152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.665 [2024-12-05 12:17:41.549161] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.665 [2024-12-05 12:17:41.561874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.665 [2024-12-05 12:17:41.562508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.665 [2024-12-05 12:17:41.562537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.665 [2024-12-05 12:17:41.562546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.665 [2024-12-05 12:17:41.562771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.665 [2024-12-05 12:17:41.562994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.665 [2024-12-05 12:17:41.563003] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.665 [2024-12-05 12:17:41.563012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.665 [2024-12-05 12:17:41.563019] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.665 9322.33 IOPS, 36.42 MiB/s [2024-12-05T11:17:41.714Z] [2024-12-05 12:17:41.577422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.665 [2024-12-05 12:17:41.578137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.665 [2024-12-05 12:17:41.578199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.665 [2024-12-05 12:17:41.578212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.665 [2024-12-05 12:17:41.578498] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.665 [2024-12-05 12:17:41.578729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.665 [2024-12-05 12:17:41.578739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.665 [2024-12-05 12:17:41.578755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.665 [2024-12-05 12:17:41.578764] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.665 [2024-12-05 12:17:41.591464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.665 [2024-12-05 12:17:41.592078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.665 [2024-12-05 12:17:41.592107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.665 [2024-12-05 12:17:41.592116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.665 [2024-12-05 12:17:41.592340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.665 [2024-12-05 12:17:41.592575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.665 [2024-12-05 12:17:41.592586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.665 [2024-12-05 12:17:41.592594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.665 [2024-12-05 12:17:41.592601] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.665 [2024-12-05 12:17:41.605501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.665 [2024-12-05 12:17:41.606069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.665 [2024-12-05 12:17:41.606094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.665 [2024-12-05 12:17:41.606102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.665 [2024-12-05 12:17:41.606325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.665 [2024-12-05 12:17:41.606558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.665 [2024-12-05 12:17:41.606569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.665 [2024-12-05 12:17:41.606577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.665 [2024-12-05 12:17:41.606585] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.665 [2024-12-05 12:17:41.619484] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.665 [2024-12-05 12:17:41.620145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.665 [2024-12-05 12:17:41.620206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.665 [2024-12-05 12:17:41.620219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.665 [2024-12-05 12:17:41.620487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.665 [2024-12-05 12:17:41.620718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.665 [2024-12-05 12:17:41.620728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.665 [2024-12-05 12:17:41.620737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.665 [2024-12-05 12:17:41.620746] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.665 [2024-12-05 12:17:41.633712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.665 [2024-12-05 12:17:41.634371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.665 [2024-12-05 12:17:41.634399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.665 [2024-12-05 12:17:41.634408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.665 [2024-12-05 12:17:41.634641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.665 [2024-12-05 12:17:41.634867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.665 [2024-12-05 12:17:41.634877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.665 [2024-12-05 12:17:41.634885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.665 [2024-12-05 12:17:41.634893] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.665 [2024-12-05 12:17:41.647580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.665 [2024-12-05 12:17:41.648290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.665 [2024-12-05 12:17:41.648352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.665 [2024-12-05 12:17:41.648366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.665 [2024-12-05 12:17:41.648638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.665 [2024-12-05 12:17:41.648870] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.665 [2024-12-05 12:17:41.648880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.665 [2024-12-05 12:17:41.648888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.665 [2024-12-05 12:17:41.648899] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.665 [2024-12-05 12:17:41.661603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.665 [2024-12-05 12:17:41.662270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.665 [2024-12-05 12:17:41.662332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.665 [2024-12-05 12:17:41.662345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.665 [2024-12-05 12:17:41.662615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.665 [2024-12-05 12:17:41.662846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.665 [2024-12-05 12:17:41.662856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.665 [2024-12-05 12:17:41.662864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.665 [2024-12-05 12:17:41.662874] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.666 [2024-12-05 12:17:41.675583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.666 [2024-12-05 12:17:41.676178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.666 [2024-12-05 12:17:41.676215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.666 [2024-12-05 12:17:41.676224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.666 [2024-12-05 12:17:41.676450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.666 [2024-12-05 12:17:41.676690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.666 [2024-12-05 12:17:41.676700] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.666 [2024-12-05 12:17:41.676707] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.666 [2024-12-05 12:17:41.676715] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.666 [2024-12-05 12:17:41.689641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.666 [2024-12-05 12:17:41.690302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.666 [2024-12-05 12:17:41.690363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.666 [2024-12-05 12:17:41.690376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.666 [2024-12-05 12:17:41.690643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.666 [2024-12-05 12:17:41.690873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.666 [2024-12-05 12:17:41.690883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.666 [2024-12-05 12:17:41.690892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.666 [2024-12-05 12:17:41.690901] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.666 [2024-12-05 12:17:41.703591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.666 [2024-12-05 12:17:41.704208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.666 [2024-12-05 12:17:41.704237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.666 [2024-12-05 12:17:41.704246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.666 [2024-12-05 12:17:41.704482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.666 [2024-12-05 12:17:41.704708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.666 [2024-12-05 12:17:41.704717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.666 [2024-12-05 12:17:41.704724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.666 [2024-12-05 12:17:41.704732] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.929 [2024-12-05 12:17:41.717661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.929 [2024-12-05 12:17:41.718360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.929 [2024-12-05 12:17:41.718422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.929 [2024-12-05 12:17:41.718435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.929 [2024-12-05 12:17:41.718714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.929 [2024-12-05 12:17:41.718945] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.929 [2024-12-05 12:17:41.718954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.929 [2024-12-05 12:17:41.718963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.929 [2024-12-05 12:17:41.718972] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.929 [2024-12-05 12:17:41.731711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.929 [2024-12-05 12:17:41.732396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.929 [2024-12-05 12:17:41.732471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.929 [2024-12-05 12:17:41.732485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.929 [2024-12-05 12:17:41.732742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.929 [2024-12-05 12:17:41.732972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.929 [2024-12-05 12:17:41.732982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.929 [2024-12-05 12:17:41.732991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.929 [2024-12-05 12:17:41.733000] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.929 [2024-12-05 12:17:41.745734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.929 [2024-12-05 12:17:41.746326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.929 [2024-12-05 12:17:41.746356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.929 [2024-12-05 12:17:41.746364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.929 [2024-12-05 12:17:41.746599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.929 [2024-12-05 12:17:41.746824] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.929 [2024-12-05 12:17:41.746834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.929 [2024-12-05 12:17:41.746842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.929 [2024-12-05 12:17:41.746849] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.929 [2024-12-05 12:17:41.759789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.929 [2024-12-05 12:17:41.760448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.929 [2024-12-05 12:17:41.760523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.929 [2024-12-05 12:17:41.760536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.929 [2024-12-05 12:17:41.760794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.929 [2024-12-05 12:17:41.761022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.929 [2024-12-05 12:17:41.761032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.929 [2024-12-05 12:17:41.761049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.929 [2024-12-05 12:17:41.761058] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.929 [2024-12-05 12:17:41.773815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.929 [2024-12-05 12:17:41.774452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.929 [2024-12-05 12:17:41.774490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.929 [2024-12-05 12:17:41.774500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.929 [2024-12-05 12:17:41.774726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.929 [2024-12-05 12:17:41.774949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.929 [2024-12-05 12:17:41.774958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.929 [2024-12-05 12:17:41.774966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.929 [2024-12-05 12:17:41.774974] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.929 [2024-12-05 12:17:41.787727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.929 [2024-12-05 12:17:41.788296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.929 [2024-12-05 12:17:41.788320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.929 [2024-12-05 12:17:41.788329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.929 [2024-12-05 12:17:41.788563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.929 [2024-12-05 12:17:41.788788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.929 [2024-12-05 12:17:41.788798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.929 [2024-12-05 12:17:41.788805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.929 [2024-12-05 12:17:41.788813] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.929 [2024-12-05 12:17:41.801740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.929 [2024-12-05 12:17:41.802393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.929 [2024-12-05 12:17:41.802470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.929 [2024-12-05 12:17:41.802484] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.929 [2024-12-05 12:17:41.802742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.929 [2024-12-05 12:17:41.802972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.929 [2024-12-05 12:17:41.802983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.929 [2024-12-05 12:17:41.802992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.929 [2024-12-05 12:17:41.803002] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.929 [2024-12-05 12:17:41.815717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.929 [2024-12-05 12:17:41.816446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.929 [2024-12-05 12:17:41.816519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.929 [2024-12-05 12:17:41.816531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.929 [2024-12-05 12:17:41.816788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.929 [2024-12-05 12:17:41.817017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.929 [2024-12-05 12:17:41.817028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.929 [2024-12-05 12:17:41.817036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.929 [2024-12-05 12:17:41.817046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.929 [2024-12-05 12:17:41.829787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.929 [2024-12-05 12:17:41.830501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.929 [2024-12-05 12:17:41.830563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.929 [2024-12-05 12:17:41.830576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.929 [2024-12-05 12:17:41.830833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.929 [2024-12-05 12:17:41.831062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.929 [2024-12-05 12:17:41.831072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.929 [2024-12-05 12:17:41.831080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.929 [2024-12-05 12:17:41.831091] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.929 [2024-12-05 12:17:41.843788] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.929 [2024-12-05 12:17:41.844427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.929 [2024-12-05 12:17:41.844462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.929 [2024-12-05 12:17:41.844472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.929 [2024-12-05 12:17:41.844698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.929 [2024-12-05 12:17:41.844921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.929 [2024-12-05 12:17:41.844929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.930 [2024-12-05 12:17:41.844937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.930 [2024-12-05 12:17:41.844944] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.930 [2024-12-05 12:17:41.857630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:16.930 [2024-12-05 12:17:41.858204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:16.930 [2024-12-05 12:17:41.858236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:16.930 [2024-12-05 12:17:41.858245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:16.930 [2024-12-05 12:17:41.858476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:16.930 [2024-12-05 12:17:41.858701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:16.930 [2024-12-05 12:17:41.858711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:16.930 [2024-12-05 12:17:41.858718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:16.930 [2024-12-05 12:17:41.858726] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:16.930 [2024-12-05 12:17:41.871630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.930 [2024-12-05 12:17:41.872323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.930 [2024-12-05 12:17:41.872386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.930 [2024-12-05 12:17:41.872399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.930 [2024-12-05 12:17:41.872669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.930 [2024-12-05 12:17:41.872900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.930 [2024-12-05 12:17:41.872911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.930 [2024-12-05 12:17:41.872919] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.930 [2024-12-05 12:17:41.872928] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.930 [2024-12-05 12:17:41.885646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.930 [2024-12-05 12:17:41.886290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.930 [2024-12-05 12:17:41.886320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.930 [2024-12-05 12:17:41.886329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.930 [2024-12-05 12:17:41.886562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.930 [2024-12-05 12:17:41.886790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.930 [2024-12-05 12:17:41.886801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.930 [2024-12-05 12:17:41.886809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.930 [2024-12-05 12:17:41.886817] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.930 [2024-12-05 12:17:41.899511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.930 [2024-12-05 12:17:41.900083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.930 [2024-12-05 12:17:41.900108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.930 [2024-12-05 12:17:41.900118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.930 [2024-12-05 12:17:41.900352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.930 [2024-12-05 12:17:41.900588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.930 [2024-12-05 12:17:41.900599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.930 [2024-12-05 12:17:41.900609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.930 [2024-12-05 12:17:41.900618] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.930 [2024-12-05 12:17:41.913522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.930 [2024-12-05 12:17:41.914214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.930 [2024-12-05 12:17:41.914276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.930 [2024-12-05 12:17:41.914290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.930 [2024-12-05 12:17:41.914560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.930 [2024-12-05 12:17:41.914791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.930 [2024-12-05 12:17:41.914802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.930 [2024-12-05 12:17:41.914811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.930 [2024-12-05 12:17:41.914821] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.930 [2024-12-05 12:17:41.927517] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.930 [2024-12-05 12:17:41.928104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.930 [2024-12-05 12:17:41.928133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.930 [2024-12-05 12:17:41.928142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.930 [2024-12-05 12:17:41.928368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.930 [2024-12-05 12:17:41.928600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.930 [2024-12-05 12:17:41.928610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.930 [2024-12-05 12:17:41.928618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.930 [2024-12-05 12:17:41.928626] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.930 [2024-12-05 12:17:41.941516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.930 [2024-12-05 12:17:41.941968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.930 [2024-12-05 12:17:41.941992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.930 [2024-12-05 12:17:41.942000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.930 [2024-12-05 12:17:41.942223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.930 [2024-12-05 12:17:41.942447] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.930 [2024-12-05 12:17:41.942466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.930 [2024-12-05 12:17:41.942483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.930 [2024-12-05 12:17:41.942490] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.930 [2024-12-05 12:17:41.955379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.930 [2024-12-05 12:17:41.956041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.930 [2024-12-05 12:17:41.956103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.930 [2024-12-05 12:17:41.956117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.930 [2024-12-05 12:17:41.956375] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.930 [2024-12-05 12:17:41.956617] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.930 [2024-12-05 12:17:41.956629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.930 [2024-12-05 12:17:41.956638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.930 [2024-12-05 12:17:41.956647] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:16.930 [2024-12-05 12:17:41.969355] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:16.930 [2024-12-05 12:17:41.970069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:16.930 [2024-12-05 12:17:41.970131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:16.930 [2024-12-05 12:17:41.970144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:16.930 [2024-12-05 12:17:41.970401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:16.930 [2024-12-05 12:17:41.970644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:16.930 [2024-12-05 12:17:41.970655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:16.930 [2024-12-05 12:17:41.970663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:16.930 [2024-12-05 12:17:41.970673] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.192 [2024-12-05 12:17:41.983382] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.192 [2024-12-05 12:17:41.983864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.192 [2024-12-05 12:17:41.983897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.192 [2024-12-05 12:17:41.983907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.192 [2024-12-05 12:17:41.984134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.192 [2024-12-05 12:17:41.984357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.192 [2024-12-05 12:17:41.984366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.192 [2024-12-05 12:17:41.984374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.193 [2024-12-05 12:17:41.984381] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.193 [2024-12-05 12:17:41.997303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.193 [2024-12-05 12:17:41.998006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.193 [2024-12-05 12:17:41.998068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.193 [2024-12-05 12:17:41.998083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.193 [2024-12-05 12:17:41.998340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.193 [2024-12-05 12:17:41.998581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.193 [2024-12-05 12:17:41.998592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.193 [2024-12-05 12:17:41.998601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.193 [2024-12-05 12:17:41.998610] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.193 [2024-12-05 12:17:42.011309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.193 [2024-12-05 12:17:42.011914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.193 [2024-12-05 12:17:42.011944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.193 [2024-12-05 12:17:42.011952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.193 [2024-12-05 12:17:42.012177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.193 [2024-12-05 12:17:42.012400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.193 [2024-12-05 12:17:42.012409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.193 [2024-12-05 12:17:42.012417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.193 [2024-12-05 12:17:42.012426] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.193 [2024-12-05 12:17:42.025332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.193 [2024-12-05 12:17:42.025925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.193 [2024-12-05 12:17:42.025951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.193 [2024-12-05 12:17:42.025959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.193 [2024-12-05 12:17:42.026182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.193 [2024-12-05 12:17:42.026404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.193 [2024-12-05 12:17:42.026414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.193 [2024-12-05 12:17:42.026422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.193 [2024-12-05 12:17:42.026429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.193 [2024-12-05 12:17:42.039325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.193 [2024-12-05 12:17:42.039890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.193 [2024-12-05 12:17:42.039923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.193 [2024-12-05 12:17:42.039932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.193 [2024-12-05 12:17:42.040155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.193 [2024-12-05 12:17:42.040377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.193 [2024-12-05 12:17:42.040387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.193 [2024-12-05 12:17:42.040395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.193 [2024-12-05 12:17:42.040402] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.193 [2024-12-05 12:17:42.053290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.193 [2024-12-05 12:17:42.053878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.193 [2024-12-05 12:17:42.053940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.193 [2024-12-05 12:17:42.053953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.193 [2024-12-05 12:17:42.054210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.193 [2024-12-05 12:17:42.054439] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.193 [2024-12-05 12:17:42.054450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.193 [2024-12-05 12:17:42.054471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.193 [2024-12-05 12:17:42.054481] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.193 [2024-12-05 12:17:42.067191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.193 [2024-12-05 12:17:42.067831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.193 [2024-12-05 12:17:42.067859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.193 [2024-12-05 12:17:42.067868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.193 [2024-12-05 12:17:42.068093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.193 [2024-12-05 12:17:42.068316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.193 [2024-12-05 12:17:42.068325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.193 [2024-12-05 12:17:42.068332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.193 [2024-12-05 12:17:42.068340] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.193 [2024-12-05 12:17:42.081263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.193 [2024-12-05 12:17:42.081899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.193 [2024-12-05 12:17:42.081924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.193 [2024-12-05 12:17:42.081933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.193 [2024-12-05 12:17:42.082164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.193 [2024-12-05 12:17:42.082387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.193 [2024-12-05 12:17:42.082396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.193 [2024-12-05 12:17:42.082403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.193 [2024-12-05 12:17:42.082410] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.193 [2024-12-05 12:17:42.095151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.193 [2024-12-05 12:17:42.095839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.193 [2024-12-05 12:17:42.095900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.193 [2024-12-05 12:17:42.095913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.193 [2024-12-05 12:17:42.096171] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.193 [2024-12-05 12:17:42.096400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.193 [2024-12-05 12:17:42.096410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.193 [2024-12-05 12:17:42.096419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.193 [2024-12-05 12:17:42.096429] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.193 [2024-12-05 12:17:42.109131] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.193 [2024-12-05 12:17:42.109814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.193 [2024-12-05 12:17:42.109877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.193 [2024-12-05 12:17:42.109890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.193 [2024-12-05 12:17:42.110147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.193 [2024-12-05 12:17:42.110376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.193 [2024-12-05 12:17:42.110386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.193 [2024-12-05 12:17:42.110395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.193 [2024-12-05 12:17:42.110404] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.193 [2024-12-05 12:17:42.123104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.193 [2024-12-05 12:17:42.123814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.193 [2024-12-05 12:17:42.123876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.193 [2024-12-05 12:17:42.123890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.193 [2024-12-05 12:17:42.124147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.193 [2024-12-05 12:17:42.124377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.194 [2024-12-05 12:17:42.124386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.194 [2024-12-05 12:17:42.124402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.194 [2024-12-05 12:17:42.124411] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.194 [2024-12-05 12:17:42.137111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.194 [2024-12-05 12:17:42.137818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.194 [2024-12-05 12:17:42.137882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.194 [2024-12-05 12:17:42.137895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.194 [2024-12-05 12:17:42.138153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.194 [2024-12-05 12:17:42.138382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.194 [2024-12-05 12:17:42.138393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.194 [2024-12-05 12:17:42.138402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.194 [2024-12-05 12:17:42.138412] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.194 [2024-12-05 12:17:42.151120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.194 [2024-12-05 12:17:42.151845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.194 [2024-12-05 12:17:42.151906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.194 [2024-12-05 12:17:42.151918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.194 [2024-12-05 12:17:42.152175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.194 [2024-12-05 12:17:42.152405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.194 [2024-12-05 12:17:42.152415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.194 [2024-12-05 12:17:42.152424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.194 [2024-12-05 12:17:42.152433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.194 [2024-12-05 12:17:42.164361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.194 [2024-12-05 12:17:42.164970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.194 [2024-12-05 12:17:42.165027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.194 [2024-12-05 12:17:42.165037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.194 [2024-12-05 12:17:42.165224] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.194 [2024-12-05 12:17:42.165383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.194 [2024-12-05 12:17:42.165391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.194 [2024-12-05 12:17:42.165398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.194 [2024-12-05 12:17:42.165406] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.194 [2024-12-05 12:17:42.177019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.194 [2024-12-05 12:17:42.177563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.194 [2024-12-05 12:17:42.177588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.194 [2024-12-05 12:17:42.177595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.194 [2024-12-05 12:17:42.177751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.194 [2024-12-05 12:17:42.177905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.194 [2024-12-05 12:17:42.177911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.194 [2024-12-05 12:17:42.177917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.194 [2024-12-05 12:17:42.177923] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.194 [2024-12-05 12:17:42.189670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.194 [2024-12-05 12:17:42.190232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.194 [2024-12-05 12:17:42.190279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.194 [2024-12-05 12:17:42.190289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.194 [2024-12-05 12:17:42.190480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.194 [2024-12-05 12:17:42.190639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.194 [2024-12-05 12:17:42.190646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.194 [2024-12-05 12:17:42.190652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.194 [2024-12-05 12:17:42.190658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.194 [2024-12-05 12:17:42.202384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.194 [2024-12-05 12:17:42.202938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.194 [2024-12-05 12:17:42.202983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.194 [2024-12-05 12:17:42.202992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.194 [2024-12-05 12:17:42.203170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.194 [2024-12-05 12:17:42.203327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.194 [2024-12-05 12:17:42.203334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.194 [2024-12-05 12:17:42.203340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.194 [2024-12-05 12:17:42.203347] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.194 [2024-12-05 12:17:42.215084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.194 [2024-12-05 12:17:42.215599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.194 [2024-12-05 12:17:42.215644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.194 [2024-12-05 12:17:42.215653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.194 [2024-12-05 12:17:42.215830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.194 [2024-12-05 12:17:42.215987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.194 [2024-12-05 12:17:42.215994] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.194 [2024-12-05 12:17:42.216000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.194 [2024-12-05 12:17:42.216006] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.194 [2024-12-05 12:17:42.227732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.194 [2024-12-05 12:17:42.228311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.194 [2024-12-05 12:17:42.228348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.194 [2024-12-05 12:17:42.228357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.194 [2024-12-05 12:17:42.228539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.194 [2024-12-05 12:17:42.228696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.194 [2024-12-05 12:17:42.228702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.194 [2024-12-05 12:17:42.228708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.194 [2024-12-05 12:17:42.228714] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.194 [2024-12-05 12:17:42.240429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.456 [2024-12-05 12:17:42.241001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.456 [2024-12-05 12:17:42.241038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.456 [2024-12-05 12:17:42.241047] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.456 [2024-12-05 12:17:42.241219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.456 [2024-12-05 12:17:42.241375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.456 [2024-12-05 12:17:42.241381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.456 [2024-12-05 12:17:42.241388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.456 [2024-12-05 12:17:42.241394] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.456 [2024-12-05 12:17:42.253114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.456 [2024-12-05 12:17:42.253741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.456 [2024-12-05 12:17:42.253777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.456 [2024-12-05 12:17:42.253785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.456 [2024-12-05 12:17:42.253956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.456 [2024-12-05 12:17:42.254117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.456 [2024-12-05 12:17:42.254123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.456 [2024-12-05 12:17:42.254129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.456 [2024-12-05 12:17:42.254135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.456 [2024-12-05 12:17:42.265855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.456 [2024-12-05 12:17:42.266404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.456 [2024-12-05 12:17:42.266437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.456 [2024-12-05 12:17:42.266446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.456 [2024-12-05 12:17:42.266625] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.456 [2024-12-05 12:17:42.266781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.456 [2024-12-05 12:17:42.266788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.456 [2024-12-05 12:17:42.266794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.456 [2024-12-05 12:17:42.266800] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.456 [2024-12-05 12:17:42.278522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.456 [2024-12-05 12:17:42.279106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.456 [2024-12-05 12:17:42.279138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.456 [2024-12-05 12:17:42.279146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.456 [2024-12-05 12:17:42.279315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.456 [2024-12-05 12:17:42.279478] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.457 [2024-12-05 12:17:42.279486] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.457 [2024-12-05 12:17:42.279491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.457 [2024-12-05 12:17:42.279497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.457 [2024-12-05 12:17:42.291211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.457 [2024-12-05 12:17:42.291789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.457 [2024-12-05 12:17:42.291820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.457 [2024-12-05 12:17:42.291829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.457 [2024-12-05 12:17:42.291998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.457 [2024-12-05 12:17:42.292153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.457 [2024-12-05 12:17:42.292159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.457 [2024-12-05 12:17:42.292169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.457 [2024-12-05 12:17:42.292175] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.457 [2024-12-05 12:17:42.303886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.457 [2024-12-05 12:17:42.304479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.457 [2024-12-05 12:17:42.304510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.457 [2024-12-05 12:17:42.304519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.457 [2024-12-05 12:17:42.304687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.457 [2024-12-05 12:17:42.304842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.457 [2024-12-05 12:17:42.304848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.457 [2024-12-05 12:17:42.304854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.457 [2024-12-05 12:17:42.304860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.457 [2024-12-05 12:17:42.316573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.457 [2024-12-05 12:17:42.317111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.457 [2024-12-05 12:17:42.317141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.457 [2024-12-05 12:17:42.317150] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.457 [2024-12-05 12:17:42.317318] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.457 [2024-12-05 12:17:42.317480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.457 [2024-12-05 12:17:42.317487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.457 [2024-12-05 12:17:42.317492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.457 [2024-12-05 12:17:42.317498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.457 [2024-12-05 12:17:42.329345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.457 [2024-12-05 12:17:42.329936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.457 [2024-12-05 12:17:42.329966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.457 [2024-12-05 12:17:42.329974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.457 [2024-12-05 12:17:42.330142] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.457 [2024-12-05 12:17:42.330297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.457 [2024-12-05 12:17:42.330303] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.457 [2024-12-05 12:17:42.330309] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.457 [2024-12-05 12:17:42.330314] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.457 [2024-12-05 12:17:42.342062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.457 [2024-12-05 12:17:42.342594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.457 [2024-12-05 12:17:42.342623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.457 [2024-12-05 12:17:42.342632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.457 [2024-12-05 12:17:42.342802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.457 [2024-12-05 12:17:42.342957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.457 [2024-12-05 12:17:42.342964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.457 [2024-12-05 12:17:42.342969] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.457 [2024-12-05 12:17:42.342975] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.457 [2024-12-05 12:17:42.354828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.457 [2024-12-05 12:17:42.355401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.457 [2024-12-05 12:17:42.355431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.457 [2024-12-05 12:17:42.355440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.457 [2024-12-05 12:17:42.355617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.457 [2024-12-05 12:17:42.355773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.457 [2024-12-05 12:17:42.355779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.457 [2024-12-05 12:17:42.355785] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.457 [2024-12-05 12:17:42.355790] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.457 [2024-12-05 12:17:42.367491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.457 [2024-12-05 12:17:42.368054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.457 [2024-12-05 12:17:42.368084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.457 [2024-12-05 12:17:42.368093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.457 [2024-12-05 12:17:42.368268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.457 [2024-12-05 12:17:42.368424] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.457 [2024-12-05 12:17:42.368430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.457 [2024-12-05 12:17:42.368436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.457 [2024-12-05 12:17:42.368442] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.457 [2024-12-05 12:17:42.380150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.457 [2024-12-05 12:17:42.380739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.457 [2024-12-05 12:17:42.380769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.457 [2024-12-05 12:17:42.380781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.457 [2024-12-05 12:17:42.380949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.457 [2024-12-05 12:17:42.381104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.457 [2024-12-05 12:17:42.381110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.457 [2024-12-05 12:17:42.381115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.457 [2024-12-05 12:17:42.381121] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.457 [2024-12-05 12:17:42.392834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.457 [2024-12-05 12:17:42.393315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.457 [2024-12-05 12:17:42.393345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.457 [2024-12-05 12:17:42.393353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.457 [2024-12-05 12:17:42.393533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.457 [2024-12-05 12:17:42.393689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.457 [2024-12-05 12:17:42.393695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.457 [2024-12-05 12:17:42.393701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.457 [2024-12-05 12:17:42.393707] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.457 [2024-12-05 12:17:42.405550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.457 [2024-12-05 12:17:42.406122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.457 [2024-12-05 12:17:42.406152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.458 [2024-12-05 12:17:42.406161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.458 [2024-12-05 12:17:42.406329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.458 [2024-12-05 12:17:42.406492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.458 [2024-12-05 12:17:42.406500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.458 [2024-12-05 12:17:42.406506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.458 [2024-12-05 12:17:42.406512] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.458 [2024-12-05 12:17:42.418221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.458 [2024-12-05 12:17:42.418798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.458 [2024-12-05 12:17:42.418828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.458 [2024-12-05 12:17:42.418837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.458 [2024-12-05 12:17:42.419005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.458 [2024-12-05 12:17:42.419164] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.458 [2024-12-05 12:17:42.419171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.458 [2024-12-05 12:17:42.419177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.458 [2024-12-05 12:17:42.419183] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.458 [2024-12-05 12:17:42.430889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.458 [2024-12-05 12:17:42.431478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.458 [2024-12-05 12:17:42.431509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.458 [2024-12-05 12:17:42.431517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.458 [2024-12-05 12:17:42.431687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.458 [2024-12-05 12:17:42.431842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.458 [2024-12-05 12:17:42.431849] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.458 [2024-12-05 12:17:42.431855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.458 [2024-12-05 12:17:42.431860] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.458 [2024-12-05 12:17:42.443568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.458 [2024-12-05 12:17:42.444150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.458 [2024-12-05 12:17:42.444180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.458 [2024-12-05 12:17:42.444188] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.458 [2024-12-05 12:17:42.444356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.458 [2024-12-05 12:17:42.444518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.458 [2024-12-05 12:17:42.444525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.458 [2024-12-05 12:17:42.444531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.458 [2024-12-05 12:17:42.444536] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.458 [2024-12-05 12:17:42.456235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.458 [2024-12-05 12:17:42.456847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.458 [2024-12-05 12:17:42.456877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.458 [2024-12-05 12:17:42.456885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.458 [2024-12-05 12:17:42.457053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.458 [2024-12-05 12:17:42.457208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.458 [2024-12-05 12:17:42.457214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.458 [2024-12-05 12:17:42.457223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.458 [2024-12-05 12:17:42.457229] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.458 [2024-12-05 12:17:42.468947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.458 [2024-12-05 12:17:42.469542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.458 [2024-12-05 12:17:42.469572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.458 [2024-12-05 12:17:42.469581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.458 [2024-12-05 12:17:42.469751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.458 [2024-12-05 12:17:42.469906] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.458 [2024-12-05 12:17:42.469913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.458 [2024-12-05 12:17:42.469918] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.458 [2024-12-05 12:17:42.469924] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.458 [2024-12-05 12:17:42.481631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.458 [2024-12-05 12:17:42.482107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.458 [2024-12-05 12:17:42.482136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.458 [2024-12-05 12:17:42.482144] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.458 [2024-12-05 12:17:42.482312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.458 [2024-12-05 12:17:42.482480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.458 [2024-12-05 12:17:42.482487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.458 [2024-12-05 12:17:42.482493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.458 [2024-12-05 12:17:42.482498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.458 [2024-12-05 12:17:42.494344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.458 [2024-12-05 12:17:42.494882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.458 [2024-12-05 12:17:42.494913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.458 [2024-12-05 12:17:42.494921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.458 [2024-12-05 12:17:42.495089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.458 [2024-12-05 12:17:42.495244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.458 [2024-12-05 12:17:42.495250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.458 [2024-12-05 12:17:42.495256] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.458 [2024-12-05 12:17:42.495261] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.720 [2024-12-05 12:17:42.506977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.720 [2024-12-05 12:17:42.507558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.720 [2024-12-05 12:17:42.507588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.720 [2024-12-05 12:17:42.507597] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.720 [2024-12-05 12:17:42.507767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.720 [2024-12-05 12:17:42.507922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.720 [2024-12-05 12:17:42.507929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.720 [2024-12-05 12:17:42.507934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.720 [2024-12-05 12:17:42.507940] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.720 [2024-12-05 12:17:42.519657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.720 [2024-12-05 12:17:42.520258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.720 [2024-12-05 12:17:42.520287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.720 [2024-12-05 12:17:42.520296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.720 [2024-12-05 12:17:42.520470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.720 [2024-12-05 12:17:42.520626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.720 [2024-12-05 12:17:42.520633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.720 [2024-12-05 12:17:42.520639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.720 [2024-12-05 12:17:42.520645] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.720 [2024-12-05 12:17:42.532350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.720 [2024-12-05 12:17:42.532919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.720 [2024-12-05 12:17:42.532949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.720 [2024-12-05 12:17:42.532958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.720 [2024-12-05 12:17:42.533126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.720 [2024-12-05 12:17:42.533282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.720 [2024-12-05 12:17:42.533288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.720 [2024-12-05 12:17:42.533294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.720 [2024-12-05 12:17:42.533300] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.720 [2024-12-05 12:17:42.545012] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.720 [2024-12-05 12:17:42.545651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.720 [2024-12-05 12:17:42.545681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.720 [2024-12-05 12:17:42.545694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.720 [2024-12-05 12:17:42.545862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.721 [2024-12-05 12:17:42.546017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.721 [2024-12-05 12:17:42.546023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.721 [2024-12-05 12:17:42.546029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.721 [2024-12-05 12:17:42.546035] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.721 [2024-12-05 12:17:42.557744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.721 [2024-12-05 12:17:42.558310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.721 [2024-12-05 12:17:42.558340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.721 [2024-12-05 12:17:42.558349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.721 [2024-12-05 12:17:42.558523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.721 [2024-12-05 12:17:42.558679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.721 [2024-12-05 12:17:42.558685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.721 [2024-12-05 12:17:42.558691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.721 [2024-12-05 12:17:42.558696] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.721 [2024-12-05 12:17:42.570406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.721 [2024-12-05 12:17:42.570928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.721 [2024-12-05 12:17:42.570957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.721 [2024-12-05 12:17:42.570966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.721 [2024-12-05 12:17:42.571134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.721 [2024-12-05 12:17:42.571288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.721 [2024-12-05 12:17:42.571295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.721 [2024-12-05 12:17:42.571300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.721 [2024-12-05 12:17:42.571306] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.721 6991.75 IOPS, 27.31 MiB/s [2024-12-05T11:17:42.770Z] [2024-12-05 12:17:42.583160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.721 [2024-12-05 12:17:42.583771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.721 [2024-12-05 12:17:42.583801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.721 [2024-12-05 12:17:42.583809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.721 [2024-12-05 12:17:42.583981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.721 [2024-12-05 12:17:42.584136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.721 [2024-12-05 12:17:42.584142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.721 [2024-12-05 12:17:42.584148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.721 [2024-12-05 12:17:42.584154] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.721 [2024-12-05 12:17:42.595880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.721 [2024-12-05 12:17:42.596484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.721 [2024-12-05 12:17:42.596514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.721 [2024-12-05 12:17:42.596523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.721 [2024-12-05 12:17:42.596691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.721 [2024-12-05 12:17:42.596846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.721 [2024-12-05 12:17:42.596853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.721 [2024-12-05 12:17:42.596858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.721 [2024-12-05 12:17:42.596864] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.721 [2024-12-05 12:17:42.608580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.721 [2024-12-05 12:17:42.609110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.721 [2024-12-05 12:17:42.609140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.721 [2024-12-05 12:17:42.609148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.721 [2024-12-05 12:17:42.609316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.721 [2024-12-05 12:17:42.609476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.721 [2024-12-05 12:17:42.609483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.721 [2024-12-05 12:17:42.609488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.721 [2024-12-05 12:17:42.609494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.721 [2024-12-05 12:17:42.621347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.721 [2024-12-05 12:17:42.621821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.721 [2024-12-05 12:17:42.621836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.721 [2024-12-05 12:17:42.621842] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.721 [2024-12-05 12:17:42.621994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.721 [2024-12-05 12:17:42.622146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.721 [2024-12-05 12:17:42.622151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.721 [2024-12-05 12:17:42.622160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.721 [2024-12-05 12:17:42.622166] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.721 [2024-12-05 12:17:42.634160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.721 [2024-12-05 12:17:42.634736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.721 [2024-12-05 12:17:42.634766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.721 [2024-12-05 12:17:42.634775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.721 [2024-12-05 12:17:42.634943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.721 [2024-12-05 12:17:42.635097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.721 [2024-12-05 12:17:42.635104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.721 [2024-12-05 12:17:42.635109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.721 [2024-12-05 12:17:42.635115] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.721 [2024-12-05 12:17:42.646826] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.721 [2024-12-05 12:17:42.647395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.721 [2024-12-05 12:17:42.647425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.721 [2024-12-05 12:17:42.647434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.721 [2024-12-05 12:17:42.647608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.721 [2024-12-05 12:17:42.647764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.721 [2024-12-05 12:17:42.647770] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.721 [2024-12-05 12:17:42.647777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.721 [2024-12-05 12:17:42.647782] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.721 [2024-12-05 12:17:42.659507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.721 [2024-12-05 12:17:42.660095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.721 [2024-12-05 12:17:42.660125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.721 [2024-12-05 12:17:42.660134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.721 [2024-12-05 12:17:42.660302] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.721 [2024-12-05 12:17:42.660465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.721 [2024-12-05 12:17:42.660473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.721 [2024-12-05 12:17:42.660478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.721 [2024-12-05 12:17:42.660484] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.721 [2024-12-05 12:17:42.672201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.721 [2024-12-05 12:17:42.672808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.721 [2024-12-05 12:17:42.672839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.721 [2024-12-05 12:17:42.672847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.721 [2024-12-05 12:17:42.673018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.722 [2024-12-05 12:17:42.673174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.722 [2024-12-05 12:17:42.673180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.722 [2024-12-05 12:17:42.673186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.722 [2024-12-05 12:17:42.673192] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.722 [2024-12-05 12:17:42.684898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.722 [2024-12-05 12:17:42.685380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.722 [2024-12-05 12:17:42.685408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.722 [2024-12-05 12:17:42.685417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.722 [2024-12-05 12:17:42.685600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.722 [2024-12-05 12:17:42.685756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.722 [2024-12-05 12:17:42.685763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.722 [2024-12-05 12:17:42.685768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.722 [2024-12-05 12:17:42.685774] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.722 [2024-12-05 12:17:42.697628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.722 [2024-12-05 12:17:42.698236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.722 [2024-12-05 12:17:42.698265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.722 [2024-12-05 12:17:42.698274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.722 [2024-12-05 12:17:42.698442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.722 [2024-12-05 12:17:42.698604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.722 [2024-12-05 12:17:42.698612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.722 [2024-12-05 12:17:42.698618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.722 [2024-12-05 12:17:42.698623] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.722 [2024-12-05 12:17:42.710323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.722 [2024-12-05 12:17:42.710891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.722 [2024-12-05 12:17:42.710924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.722 [2024-12-05 12:17:42.710932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.722 [2024-12-05 12:17:42.711100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.722 [2024-12-05 12:17:42.711255] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.722 [2024-12-05 12:17:42.711261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.722 [2024-12-05 12:17:42.711267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.722 [2024-12-05 12:17:42.711273] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.722 [2024-12-05 12:17:42.722980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.722 [2024-12-05 12:17:42.723554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.722 [2024-12-05 12:17:42.723584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.722 [2024-12-05 12:17:42.723593] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.722 [2024-12-05 12:17:42.723763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.722 [2024-12-05 12:17:42.723918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.722 [2024-12-05 12:17:42.723924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.722 [2024-12-05 12:17:42.723930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.722 [2024-12-05 12:17:42.723936] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.722 [2024-12-05 12:17:42.735641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.722 [2024-12-05 12:17:42.736208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.722 [2024-12-05 12:17:42.736238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.722 [2024-12-05 12:17:42.736246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.722 [2024-12-05 12:17:42.736414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.722 [2024-12-05 12:17:42.736575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.722 [2024-12-05 12:17:42.736583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.722 [2024-12-05 12:17:42.736589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.722 [2024-12-05 12:17:42.736595] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.722 [2024-12-05 12:17:42.748295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.722 [2024-12-05 12:17:42.748898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.722 [2024-12-05 12:17:42.748929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.722 [2024-12-05 12:17:42.748937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.722 [2024-12-05 12:17:42.749109] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.722 [2024-12-05 12:17:42.749264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.722 [2024-12-05 12:17:42.749270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.722 [2024-12-05 12:17:42.749276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.722 [2024-12-05 12:17:42.749282] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.722 [2024-12-05 12:17:42.760989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.722 [2024-12-05 12:17:42.761550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.722 [2024-12-05 12:17:42.761579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.722 [2024-12-05 12:17:42.761588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.722 [2024-12-05 12:17:42.761758] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.722 [2024-12-05 12:17:42.761913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.722 [2024-12-05 12:17:42.761920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.722 [2024-12-05 12:17:42.761926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.722 [2024-12-05 12:17:42.761932] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.985 [2024-12-05 12:17:42.773652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.985 [2024-12-05 12:17:42.774172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.985 [2024-12-05 12:17:42.774186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.985 [2024-12-05 12:17:42.774192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.985 [2024-12-05 12:17:42.774344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.985 [2024-12-05 12:17:42.774502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.985 [2024-12-05 12:17:42.774509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.985 [2024-12-05 12:17:42.774514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.985 [2024-12-05 12:17:42.774519] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.985 [2024-12-05 12:17:42.786358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.985 [2024-12-05 12:17:42.786826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.986 [2024-12-05 12:17:42.786840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.986 [2024-12-05 12:17:42.786845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.986 [2024-12-05 12:17:42.786997] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.986 [2024-12-05 12:17:42.787148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.986 [2024-12-05 12:17:42.787154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.986 [2024-12-05 12:17:42.787166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.986 [2024-12-05 12:17:42.787171] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.986 [2024-12-05 12:17:42.799009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.986 [2024-12-05 12:17:42.799378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.986 [2024-12-05 12:17:42.799390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.986 [2024-12-05 12:17:42.799395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.986 [2024-12-05 12:17:42.799553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.986 [2024-12-05 12:17:42.799706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.986 [2024-12-05 12:17:42.799711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.986 [2024-12-05 12:17:42.799716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.986 [2024-12-05 12:17:42.799721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.986 [2024-12-05 12:17:42.811699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.986 [2024-12-05 12:17:42.812182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.986 [2024-12-05 12:17:42.812195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.986 [2024-12-05 12:17:42.812200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.986 [2024-12-05 12:17:42.812351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.986 [2024-12-05 12:17:42.812509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.986 [2024-12-05 12:17:42.812515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.986 [2024-12-05 12:17:42.812520] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.986 [2024-12-05 12:17:42.812525] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.986 [2024-12-05 12:17:42.824354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.986 [2024-12-05 12:17:42.824877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.986 [2024-12-05 12:17:42.824906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.986 [2024-12-05 12:17:42.824915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.986 [2024-12-05 12:17:42.825083] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.986 [2024-12-05 12:17:42.825238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.986 [2024-12-05 12:17:42.825245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.986 [2024-12-05 12:17:42.825250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.986 [2024-12-05 12:17:42.825256] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.986 [2024-12-05 12:17:42.837114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.986 [2024-12-05 12:17:42.837744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.986 [2024-12-05 12:17:42.837774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.986 [2024-12-05 12:17:42.837783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.986 [2024-12-05 12:17:42.837951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.986 [2024-12-05 12:17:42.838106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.986 [2024-12-05 12:17:42.838112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.986 [2024-12-05 12:17:42.838118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.986 [2024-12-05 12:17:42.838124] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.986 [2024-12-05 12:17:42.849827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:17.986 [2024-12-05 12:17:42.850394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:17.986 [2024-12-05 12:17:42.850424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:17.986 [2024-12-05 12:17:42.850433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:17.986 [2024-12-05 12:17:42.850610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:17.986 [2024-12-05 12:17:42.850766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:17.986 [2024-12-05 12:17:42.850772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:17.986 [2024-12-05 12:17:42.850778] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:17.986 [2024-12-05 12:17:42.850784] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:17.986 [2024-12-05 12:17:42.862486] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.986 [2024-12-05 12:17:42.863053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.986 [2024-12-05 12:17:42.863083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.986 [2024-12-05 12:17:42.863091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.986 [2024-12-05 12:17:42.863259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.986 [2024-12-05 12:17:42.863414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.986 [2024-12-05 12:17:42.863421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.986 [2024-12-05 12:17:42.863426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.986 [2024-12-05 12:17:42.863432] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.986 [2024-12-05 12:17:42.875144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.986 [2024-12-05 12:17:42.875517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.986 [2024-12-05 12:17:42.875541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.986 [2024-12-05 12:17:42.875547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.986 [2024-12-05 12:17:42.875706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.986 [2024-12-05 12:17:42.875859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.986 [2024-12-05 12:17:42.875865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.986 [2024-12-05 12:17:42.875870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.986 [2024-12-05 12:17:42.875875] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.986 [2024-12-05 12:17:42.887871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.986 [2024-12-05 12:17:42.888438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.986 [2024-12-05 12:17:42.888472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.986 [2024-12-05 12:17:42.888481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.986 [2024-12-05 12:17:42.888649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.986 [2024-12-05 12:17:42.888804] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.986 [2024-12-05 12:17:42.888810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.986 [2024-12-05 12:17:42.888816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.986 [2024-12-05 12:17:42.888822] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.986 [2024-12-05 12:17:42.900520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.986 [2024-12-05 12:17:42.901137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.986 [2024-12-05 12:17:42.901167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.986 [2024-12-05 12:17:42.901176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.986 [2024-12-05 12:17:42.901343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.986 [2024-12-05 12:17:42.901505] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.986 [2024-12-05 12:17:42.901513] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.986 [2024-12-05 12:17:42.901518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.986 [2024-12-05 12:17:42.901524] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.987 [2024-12-05 12:17:42.913225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.987 [2024-12-05 12:17:42.913817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.987 [2024-12-05 12:17:42.913847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.987 [2024-12-05 12:17:42.913856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.987 [2024-12-05 12:17:42.914028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.987 [2024-12-05 12:17:42.914183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.987 [2024-12-05 12:17:42.914190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.987 [2024-12-05 12:17:42.914196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.987 [2024-12-05 12:17:42.914202] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.987 [2024-12-05 12:17:42.925920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.987 [2024-12-05 12:17:42.926493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.987 [2024-12-05 12:17:42.926524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.987 [2024-12-05 12:17:42.926533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.987 [2024-12-05 12:17:42.926704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.987 [2024-12-05 12:17:42.926860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.987 [2024-12-05 12:17:42.926867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.987 [2024-12-05 12:17:42.926873] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.987 [2024-12-05 12:17:42.926879] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.987 [2024-12-05 12:17:42.938598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.987 [2024-12-05 12:17:42.939090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.987 [2024-12-05 12:17:42.939105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.987 [2024-12-05 12:17:42.939110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.987 [2024-12-05 12:17:42.939262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.987 [2024-12-05 12:17:42.939414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.987 [2024-12-05 12:17:42.939421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.987 [2024-12-05 12:17:42.939426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.987 [2024-12-05 12:17:42.939431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.987 [2024-12-05 12:17:42.951281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.987 [2024-12-05 12:17:42.951770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.987 [2024-12-05 12:17:42.951783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.987 [2024-12-05 12:17:42.951789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.987 [2024-12-05 12:17:42.951940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.987 [2024-12-05 12:17:42.952092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.987 [2024-12-05 12:17:42.952097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.987 [2024-12-05 12:17:42.952106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.987 [2024-12-05 12:17:42.952111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.987 [2024-12-05 12:17:42.963961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.987 [2024-12-05 12:17:42.964522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.987 [2024-12-05 12:17:42.964552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.987 [2024-12-05 12:17:42.964561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.987 [2024-12-05 12:17:42.964731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.987 [2024-12-05 12:17:42.964886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.987 [2024-12-05 12:17:42.964893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.987 [2024-12-05 12:17:42.964899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.987 [2024-12-05 12:17:42.964904] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.987 [2024-12-05 12:17:42.976625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.987 [2024-12-05 12:17:42.977112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.987 [2024-12-05 12:17:42.977126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.987 [2024-12-05 12:17:42.977132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.987 [2024-12-05 12:17:42.977284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.987 [2024-12-05 12:17:42.977436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.987 [2024-12-05 12:17:42.977442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.987 [2024-12-05 12:17:42.977447] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.987 [2024-12-05 12:17:42.977452] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.987 [2024-12-05 12:17:42.989303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.987 [2024-12-05 12:17:42.989895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.987 [2024-12-05 12:17:42.989925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.987 [2024-12-05 12:17:42.989934] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.987 [2024-12-05 12:17:42.990102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.987 [2024-12-05 12:17:42.990257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.987 [2024-12-05 12:17:42.990264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.987 [2024-12-05 12:17:42.990270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.987 [2024-12-05 12:17:42.990276] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.987 [2024-12-05 12:17:43.001989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.987 [2024-12-05 12:17:43.002553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.987 [2024-12-05 12:17:43.002583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.987 [2024-12-05 12:17:43.002592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.987 [2024-12-05 12:17:43.002762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.987 [2024-12-05 12:17:43.002917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.987 [2024-12-05 12:17:43.002923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.987 [2024-12-05 12:17:43.002929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.987 [2024-12-05 12:17:43.002935] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.987 [2024-12-05 12:17:43.014642] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.987 [2024-12-05 12:17:43.015254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.987 [2024-12-05 12:17:43.015283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.987 [2024-12-05 12:17:43.015292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.987 [2024-12-05 12:17:43.015467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.987 [2024-12-05 12:17:43.015623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.987 [2024-12-05 12:17:43.015630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.987 [2024-12-05 12:17:43.015635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.987 [2024-12-05 12:17:43.015641] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:17.987 [2024-12-05 12:17:43.027342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:17.987 [2024-12-05 12:17:43.027944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:17.987 [2024-12-05 12:17:43.027974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:17.987 [2024-12-05 12:17:43.027983] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:17.987 [2024-12-05 12:17:43.028150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:17.988 [2024-12-05 12:17:43.028305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:17.988 [2024-12-05 12:17:43.028312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:17.988 [2024-12-05 12:17:43.028317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:17.988 [2024-12-05 12:17:43.028323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.250 [2024-12-05 12:17:43.040039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.250 [2024-12-05 12:17:43.040580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.250 [2024-12-05 12:17:43.040614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.250 [2024-12-05 12:17:43.040623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.250 [2024-12-05 12:17:43.040792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.250 [2024-12-05 12:17:43.040947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.250 [2024-12-05 12:17:43.040954] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.250 [2024-12-05 12:17:43.040959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.250 [2024-12-05 12:17:43.040965] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.250 [2024-12-05 12:17:43.052679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.250 [2024-12-05 12:17:43.053027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.250 [2024-12-05 12:17:43.053042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.250 [2024-12-05 12:17:43.053048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.250 [2024-12-05 12:17:43.053200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.250 [2024-12-05 12:17:43.053351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.250 [2024-12-05 12:17:43.053357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.250 [2024-12-05 12:17:43.053362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.250 [2024-12-05 12:17:43.053367] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.250 [2024-12-05 12:17:43.065360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.250 [2024-12-05 12:17:43.065819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.250 [2024-12-05 12:17:43.065832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.250 [2024-12-05 12:17:43.065837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.250 [2024-12-05 12:17:43.065989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.250 [2024-12-05 12:17:43.066141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.250 [2024-12-05 12:17:43.066147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.250 [2024-12-05 12:17:43.066152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.250 [2024-12-05 12:17:43.066157] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.250 [2024-12-05 12:17:43.078013] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.250 [2024-12-05 12:17:43.078507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.250 [2024-12-05 12:17:43.078520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.250 [2024-12-05 12:17:43.078525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.250 [2024-12-05 12:17:43.078680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.250 [2024-12-05 12:17:43.078832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.250 [2024-12-05 12:17:43.078838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.250 [2024-12-05 12:17:43.078843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.250 [2024-12-05 12:17:43.078848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.250 [2024-12-05 12:17:43.090709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.250 [2024-12-05 12:17:43.091153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.250 [2024-12-05 12:17:43.091166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.250 [2024-12-05 12:17:43.091171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.250 [2024-12-05 12:17:43.091323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.250 [2024-12-05 12:17:43.091481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.250 [2024-12-05 12:17:43.091487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.250 [2024-12-05 12:17:43.091492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.250 [2024-12-05 12:17:43.091497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.250 [2024-12-05 12:17:43.103346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.250 [2024-12-05 12:17:43.103772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.250 [2024-12-05 12:17:43.103785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.250 [2024-12-05 12:17:43.103790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.250 [2024-12-05 12:17:43.103942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.250 [2024-12-05 12:17:43.104094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.250 [2024-12-05 12:17:43.104100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.250 [2024-12-05 12:17:43.104106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.250 [2024-12-05 12:17:43.104110] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.250 [2024-12-05 12:17:43.115986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.250 [2024-12-05 12:17:43.116469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.250 [2024-12-05 12:17:43.116482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.250 [2024-12-05 12:17:43.116488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.250 [2024-12-05 12:17:43.116639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.250 [2024-12-05 12:17:43.116791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.250 [2024-12-05 12:17:43.116797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.250 [2024-12-05 12:17:43.116805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.250 [2024-12-05 12:17:43.116810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.250 [2024-12-05 12:17:43.128662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.250 [2024-12-05 12:17:43.129138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.250 [2024-12-05 12:17:43.129150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.250 [2024-12-05 12:17:43.129155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.251 [2024-12-05 12:17:43.129307] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.251 [2024-12-05 12:17:43.129463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.251 [2024-12-05 12:17:43.129469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.251 [2024-12-05 12:17:43.129474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.251 [2024-12-05 12:17:43.129478] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.251 [2024-12-05 12:17:43.141322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.251 [2024-12-05 12:17:43.141874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.251 [2024-12-05 12:17:43.141905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.251 [2024-12-05 12:17:43.141914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.251 [2024-12-05 12:17:43.142084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.251 [2024-12-05 12:17:43.142239] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.251 [2024-12-05 12:17:43.142246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.251 [2024-12-05 12:17:43.142252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.251 [2024-12-05 12:17:43.142262] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.251 [2024-12-05 12:17:43.154001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.251 [2024-12-05 12:17:43.154577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.251 [2024-12-05 12:17:43.154607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.251 [2024-12-05 12:17:43.154616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.251 [2024-12-05 12:17:43.154786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.251 [2024-12-05 12:17:43.154941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.251 [2024-12-05 12:17:43.154948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.251 [2024-12-05 12:17:43.154953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.251 [2024-12-05 12:17:43.154959] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.251 [2024-12-05 12:17:43.166684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.251 [2024-12-05 12:17:43.167275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.251 [2024-12-05 12:17:43.167305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.251 [2024-12-05 12:17:43.167314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.251 [2024-12-05 12:17:43.167488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.251 [2024-12-05 12:17:43.167644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.251 [2024-12-05 12:17:43.167651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.251 [2024-12-05 12:17:43.167656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.251 [2024-12-05 12:17:43.167662] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.251 [2024-12-05 12:17:43.179411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.251 [2024-12-05 12:17:43.179853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.251 [2024-12-05 12:17:43.179882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.251 [2024-12-05 12:17:43.179891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.251 [2024-12-05 12:17:43.180058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.251 [2024-12-05 12:17:43.180213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.251 [2024-12-05 12:17:43.180220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.251 [2024-12-05 12:17:43.180225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.251 [2024-12-05 12:17:43.180231] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.251 [2024-12-05 12:17:43.192098] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.251 [2024-12-05 12:17:43.192594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.251 [2024-12-05 12:17:43.192623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.251 [2024-12-05 12:17:43.192632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.251 [2024-12-05 12:17:43.192802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.251 [2024-12-05 12:17:43.192957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.251 [2024-12-05 12:17:43.192964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.251 [2024-12-05 12:17:43.192970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.251 [2024-12-05 12:17:43.192976] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.251 [2024-12-05 12:17:43.204898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.251 [2024-12-05 12:17:43.205391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.251 [2024-12-05 12:17:43.205410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.251 [2024-12-05 12:17:43.205415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.251 [2024-12-05 12:17:43.205572] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.251 [2024-12-05 12:17:43.205725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.251 [2024-12-05 12:17:43.205731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.251 [2024-12-05 12:17:43.205736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.251 [2024-12-05 12:17:43.205741] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.251 [2024-12-05 12:17:43.217595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.251 [2024-12-05 12:17:43.218113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.251 [2024-12-05 12:17:43.218143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.251 [2024-12-05 12:17:43.218151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.251 [2024-12-05 12:17:43.218319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.251 [2024-12-05 12:17:43.218481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.251 [2024-12-05 12:17:43.218488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.251 [2024-12-05 12:17:43.218494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.251 [2024-12-05 12:17:43.218499] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.251 [2024-12-05 12:17:43.230358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.251 [2024-12-05 12:17:43.230915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.251 [2024-12-05 12:17:43.230945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.251 [2024-12-05 12:17:43.230954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.251 [2024-12-05 12:17:43.231121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.251 [2024-12-05 12:17:43.231276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.251 [2024-12-05 12:17:43.231283] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.251 [2024-12-05 12:17:43.231289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.251 [2024-12-05 12:17:43.231295] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.251 [2024-12-05 12:17:43.243047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.251 [2024-12-05 12:17:43.243550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.251 [2024-12-05 12:17:43.243565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.251 [2024-12-05 12:17:43.243571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.251 [2024-12-05 12:17:43.243727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.251 [2024-12-05 12:17:43.243878] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.252 [2024-12-05 12:17:43.243884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.252 [2024-12-05 12:17:43.243889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.252 [2024-12-05 12:17:43.243894] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.252 [2024-12-05 12:17:43.255750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.252 [2024-12-05 12:17:43.256325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.252 [2024-12-05 12:17:43.256355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.252 [2024-12-05 12:17:43.256364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.252 [2024-12-05 12:17:43.256538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.252 [2024-12-05 12:17:43.256693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.252 [2024-12-05 12:17:43.256699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.252 [2024-12-05 12:17:43.256705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.252 [2024-12-05 12:17:43.256710] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.252 [2024-12-05 12:17:43.268416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.252 [2024-12-05 12:17:43.268755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.252 [2024-12-05 12:17:43.268772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.252 [2024-12-05 12:17:43.268778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.252 [2024-12-05 12:17:43.268930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.252 [2024-12-05 12:17:43.269082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.252 [2024-12-05 12:17:43.269088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.252 [2024-12-05 12:17:43.269093] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.252 [2024-12-05 12:17:43.269098] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.252 [2024-12-05 12:17:43.281105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.252 [2024-12-05 12:17:43.281604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.252 [2024-12-05 12:17:43.281617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.252 [2024-12-05 12:17:43.281623] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.252 [2024-12-05 12:17:43.281775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.252 [2024-12-05 12:17:43.281926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.252 [2024-12-05 12:17:43.281932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.252 [2024-12-05 12:17:43.281941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.252 [2024-12-05 12:17:43.281945] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.252 [2024-12-05 12:17:43.293806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.252 [2024-12-05 12:17:43.294316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.252 [2024-12-05 12:17:43.294329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.252 [2024-12-05 12:17:43.294334] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.252 [2024-12-05 12:17:43.294490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.252 [2024-12-05 12:17:43.294643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.252 [2024-12-05 12:17:43.294649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.252 [2024-12-05 12:17:43.294654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.252 [2024-12-05 12:17:43.294658] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.515 [2024-12-05 12:17:43.306526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.515 [2024-12-05 12:17:43.307011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.515 [2024-12-05 12:17:43.307024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.515 [2024-12-05 12:17:43.307029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.515 [2024-12-05 12:17:43.307181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.515 [2024-12-05 12:17:43.307332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.515 [2024-12-05 12:17:43.307338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.515 [2024-12-05 12:17:43.307343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.515 [2024-12-05 12:17:43.307348] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.515 [2024-12-05 12:17:43.319199] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.515 [2024-12-05 12:17:43.319793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.515 [2024-12-05 12:17:43.319823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.515 [2024-12-05 12:17:43.319832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.515 [2024-12-05 12:17:43.320000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.515 [2024-12-05 12:17:43.320155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.515 [2024-12-05 12:17:43.320162] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.515 [2024-12-05 12:17:43.320167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.515 [2024-12-05 12:17:43.320173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.515 [2024-12-05 12:17:43.331893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.515 [2024-12-05 12:17:43.332373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.515 [2024-12-05 12:17:43.332388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.515 [2024-12-05 12:17:43.332394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.515 [2024-12-05 12:17:43.332551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.515 [2024-12-05 12:17:43.332704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.515 [2024-12-05 12:17:43.332710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.515 [2024-12-05 12:17:43.332715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.515 [2024-12-05 12:17:43.332719] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.515 [2024-12-05 12:17:43.344575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.515 [2024-12-05 12:17:43.345110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.515 [2024-12-05 12:17:43.345140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.515 [2024-12-05 12:17:43.345148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.515 [2024-12-05 12:17:43.345316] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.515 [2024-12-05 12:17:43.345477] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.515 [2024-12-05 12:17:43.345484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.515 [2024-12-05 12:17:43.345489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.515 [2024-12-05 12:17:43.345495] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.516 [2024-12-05 12:17:43.357209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.516 [2024-12-05 12:17:43.357832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.516 [2024-12-05 12:17:43.357861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.516 [2024-12-05 12:17:43.357870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.516 [2024-12-05 12:17:43.358038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.516 [2024-12-05 12:17:43.358193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.516 [2024-12-05 12:17:43.358200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.516 [2024-12-05 12:17:43.358206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.516 [2024-12-05 12:17:43.358212] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.516 [2024-12-05 12:17:43.369932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.516 [2024-12-05 12:17:43.370529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.516 [2024-12-05 12:17:43.370563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.516 [2024-12-05 12:17:43.370572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.516 [2024-12-05 12:17:43.370742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.516 [2024-12-05 12:17:43.370897] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.516 [2024-12-05 12:17:43.370904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.516 [2024-12-05 12:17:43.370909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.516 [2024-12-05 12:17:43.370915] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.516 [2024-12-05 12:17:43.382640] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.516 [2024-12-05 12:17:43.383029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.516 [2024-12-05 12:17:43.383044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.516 [2024-12-05 12:17:43.383049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.516 [2024-12-05 12:17:43.383201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.516 [2024-12-05 12:17:43.383353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.516 [2024-12-05 12:17:43.383359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.516 [2024-12-05 12:17:43.383364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.516 [2024-12-05 12:17:43.383369] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.516 [2024-12-05 12:17:43.395376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.516 [2024-12-05 12:17:43.395852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.516 [2024-12-05 12:17:43.395882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.516 [2024-12-05 12:17:43.395890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.516 [2024-12-05 12:17:43.396058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.516 [2024-12-05 12:17:43.396213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.516 [2024-12-05 12:17:43.396220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.516 [2024-12-05 12:17:43.396225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.516 [2024-12-05 12:17:43.396231] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.516 [2024-12-05 12:17:43.408096] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.516 [2024-12-05 12:17:43.408666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.516 [2024-12-05 12:17:43.408698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.516 [2024-12-05 12:17:43.408706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.516 [2024-12-05 12:17:43.408882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.516 [2024-12-05 12:17:43.409038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.516 [2024-12-05 12:17:43.409044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.516 [2024-12-05 12:17:43.409049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.516 [2024-12-05 12:17:43.409055] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.516 [2024-12-05 12:17:43.420775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.516 [2024-12-05 12:17:43.421151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.516 [2024-12-05 12:17:43.421165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.516 [2024-12-05 12:17:43.421171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.516 [2024-12-05 12:17:43.421323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.516 [2024-12-05 12:17:43.421480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.516 [2024-12-05 12:17:43.421487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.516 [2024-12-05 12:17:43.421492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.516 [2024-12-05 12:17:43.421497] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.516 [2024-12-05 12:17:43.433492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.516 [2024-12-05 12:17:43.434125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.516 [2024-12-05 12:17:43.434155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.516 [2024-12-05 12:17:43.434164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.516 [2024-12-05 12:17:43.434331] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.516 [2024-12-05 12:17:43.434493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.516 [2024-12-05 12:17:43.434500] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.516 [2024-12-05 12:17:43.434508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.516 [2024-12-05 12:17:43.434515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.516 [2024-12-05 12:17:43.446222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.516 [2024-12-05 12:17:43.446701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.516 [2024-12-05 12:17:43.446716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.516 [2024-12-05 12:17:43.446722] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.516 [2024-12-05 12:17:43.446874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.516 [2024-12-05 12:17:43.447026] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.516 [2024-12-05 12:17:43.447032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.516 [2024-12-05 12:17:43.447041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.516 [2024-12-05 12:17:43.447046] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.516 [2024-12-05 12:17:43.458896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:18.516 [2024-12-05 12:17:43.459419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:18.516 [2024-12-05 12:17:43.459432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:18.516 [2024-12-05 12:17:43.459437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:18.516 [2024-12-05 12:17:43.459593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:18.516 [2024-12-05 12:17:43.459746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:18.516 [2024-12-05 12:17:43.459751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:18.516 [2024-12-05 12:17:43.459756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:18.516 [2024-12-05 12:17:43.459761] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:18.516 [2024-12-05 12:17:43.471610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.516 [2024-12-05 12:17:43.472054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.516 [2024-12-05 12:17:43.472066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.516 [2024-12-05 12:17:43.472072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.516 [2024-12-05 12:17:43.472223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.516 [2024-12-05 12:17:43.472375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.517 [2024-12-05 12:17:43.472381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.517 [2024-12-05 12:17:43.472386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.517 [2024-12-05 12:17:43.472390] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.517 [2024-12-05 12:17:43.484234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.517 [2024-12-05 12:17:43.484488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-12-05 12:17:43.484500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.517 [2024-12-05 12:17:43.484505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.517 [2024-12-05 12:17:43.484657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.517 [2024-12-05 12:17:43.484808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.517 [2024-12-05 12:17:43.484815] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.517 [2024-12-05 12:17:43.484820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.517 [2024-12-05 12:17:43.484824] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.517 [2024-12-05 12:17:43.496975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.517 [2024-12-05 12:17:43.497445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-12-05 12:17:43.497462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.517 [2024-12-05 12:17:43.497467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.517 [2024-12-05 12:17:43.497619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.517 [2024-12-05 12:17:43.497771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.517 [2024-12-05 12:17:43.497778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.517 [2024-12-05 12:17:43.497782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.517 [2024-12-05 12:17:43.497787] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.517 [2024-12-05 12:17:43.509633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.517 [2024-12-05 12:17:43.510017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-12-05 12:17:43.510029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.517 [2024-12-05 12:17:43.510035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.517 [2024-12-05 12:17:43.510186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.517 [2024-12-05 12:17:43.510337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.517 [2024-12-05 12:17:43.510343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.517 [2024-12-05 12:17:43.510348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.517 [2024-12-05 12:17:43.510352] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.517 [2024-12-05 12:17:43.522343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.517 [2024-12-05 12:17:43.522799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-12-05 12:17:43.522811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.517 [2024-12-05 12:17:43.522816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.517 [2024-12-05 12:17:43.522968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.517 [2024-12-05 12:17:43.523119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.517 [2024-12-05 12:17:43.523125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.517 [2024-12-05 12:17:43.523130] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.517 [2024-12-05 12:17:43.523135] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.517 [2024-12-05 12:17:43.534978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.517 [2024-12-05 12:17:43.535337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-12-05 12:17:43.535352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.517 [2024-12-05 12:17:43.535357] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.517 [2024-12-05 12:17:43.535513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.517 [2024-12-05 12:17:43.535665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.517 [2024-12-05 12:17:43.535671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.517 [2024-12-05 12:17:43.535676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.517 [2024-12-05 12:17:43.535680] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.517 [2024-12-05 12:17:43.547667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.517 [2024-12-05 12:17:43.548188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-12-05 12:17:43.548200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.517 [2024-12-05 12:17:43.548206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.517 [2024-12-05 12:17:43.548357] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.517 [2024-12-05 12:17:43.548512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.517 [2024-12-05 12:17:43.548517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.517 [2024-12-05 12:17:43.548522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.517 [2024-12-05 12:17:43.548527] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.517 [2024-12-05 12:17:43.560377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.517 [2024-12-05 12:17:43.560737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.517 [2024-12-05 12:17:43.560749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.517 [2024-12-05 12:17:43.560755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.517 [2024-12-05 12:17:43.560906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.517 [2024-12-05 12:17:43.561058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.517 [2024-12-05 12:17:43.561063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.517 [2024-12-05 12:17:43.561068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.517 [2024-12-05 12:17:43.561073] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.779 [2024-12-05 12:17:43.573068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.779 [2024-12-05 12:17:43.573419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.779 [2024-12-05 12:17:43.573432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.779 [2024-12-05 12:17:43.573437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.779 [2024-12-05 12:17:43.573596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.779 [2024-12-05 12:17:43.573751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.779 [2024-12-05 12:17:43.573757] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.779 [2024-12-05 12:17:43.573762] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.779 [2024-12-05 12:17:43.573766] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.779 5593.40 IOPS, 21.85 MiB/s [2024-12-05T11:17:43.828Z] [2024-12-05 12:17:43.585756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.779 [2024-12-05 12:17:43.586321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.779 [2024-12-05 12:17:43.586351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.779 [2024-12-05 12:17:43.586360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.779 [2024-12-05 12:17:43.586535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.779 [2024-12-05 12:17:43.586691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.779 [2024-12-05 12:17:43.586698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.779 [2024-12-05 12:17:43.586703] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.779 [2024-12-05 12:17:43.586709] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.779 [2024-12-05 12:17:43.598429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.779 [2024-12-05 12:17:43.598985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.779 [2024-12-05 12:17:43.599015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.779 [2024-12-05 12:17:43.599024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.779 [2024-12-05 12:17:43.599192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.779 [2024-12-05 12:17:43.599347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.779 [2024-12-05 12:17:43.599353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.780 [2024-12-05 12:17:43.599359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.780 [2024-12-05 12:17:43.599365] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.780 [2024-12-05 12:17:43.611086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.780 [2024-12-05 12:17:43.611615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.780 [2024-12-05 12:17:43.611629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.780 [2024-12-05 12:17:43.611635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.780 [2024-12-05 12:17:43.611788] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.780 [2024-12-05 12:17:43.611939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.780 [2024-12-05 12:17:43.611950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.780 [2024-12-05 12:17:43.611955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.780 [2024-12-05 12:17:43.611960] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.780 [2024-12-05 12:17:43.623820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.780 [2024-12-05 12:17:43.624305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.780 [2024-12-05 12:17:43.624318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.780 [2024-12-05 12:17:43.624324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.780 [2024-12-05 12:17:43.624482] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.780 [2024-12-05 12:17:43.624635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.780 [2024-12-05 12:17:43.624641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.780 [2024-12-05 12:17:43.624646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.780 [2024-12-05 12:17:43.624651] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.780 [2024-12-05 12:17:43.636528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.780 [2024-12-05 12:17:43.636984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.780 [2024-12-05 12:17:43.636998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.780 [2024-12-05 12:17:43.637003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.780 [2024-12-05 12:17:43.637155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.780 [2024-12-05 12:17:43.637307] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.780 [2024-12-05 12:17:43.637313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.780 [2024-12-05 12:17:43.637318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.780 [2024-12-05 12:17:43.637323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.780 [2024-12-05 12:17:43.649184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.780 [2024-12-05 12:17:43.649677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.780 [2024-12-05 12:17:43.649690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.780 [2024-12-05 12:17:43.649696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.780 [2024-12-05 12:17:43.649848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.780 [2024-12-05 12:17:43.650000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.780 [2024-12-05 12:17:43.650005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.780 [2024-12-05 12:17:43.650011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.780 [2024-12-05 12:17:43.650016] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.780 [2024-12-05 12:17:43.661880] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.780 [2024-12-05 12:17:43.662420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.780 [2024-12-05 12:17:43.662450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.780 [2024-12-05 12:17:43.662467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.780 [2024-12-05 12:17:43.662637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.780 [2024-12-05 12:17:43.662792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.780 [2024-12-05 12:17:43.662799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.780 [2024-12-05 12:17:43.662804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.780 [2024-12-05 12:17:43.662810] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.780 [2024-12-05 12:17:43.674544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.780 [2024-12-05 12:17:43.674945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.780 [2024-12-05 12:17:43.674960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.780 [2024-12-05 12:17:43.674966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.780 [2024-12-05 12:17:43.675118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.780 [2024-12-05 12:17:43.675270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.780 [2024-12-05 12:17:43.675277] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.780 [2024-12-05 12:17:43.675282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.780 [2024-12-05 12:17:43.675287] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.780 [2024-12-05 12:17:43.687290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.780 [2024-12-05 12:17:43.687763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.780 [2024-12-05 12:17:43.687777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.780 [2024-12-05 12:17:43.687783] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.780 [2024-12-05 12:17:43.687935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.780 [2024-12-05 12:17:43.688087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.780 [2024-12-05 12:17:43.688093] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.780 [2024-12-05 12:17:43.688098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.780 [2024-12-05 12:17:43.688103] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.780 [2024-12-05 12:17:43.699982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.780 [2024-12-05 12:17:43.700464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.780 [2024-12-05 12:17:43.700481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.780 [2024-12-05 12:17:43.700487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.780 [2024-12-05 12:17:43.700639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.780 [2024-12-05 12:17:43.700791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.780 [2024-12-05 12:17:43.700797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.780 [2024-12-05 12:17:43.700802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.780 [2024-12-05 12:17:43.700806] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.780 [2024-12-05 12:17:43.712673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.780 [2024-12-05 12:17:43.712985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.780 [2024-12-05 12:17:43.712998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.780 [2024-12-05 12:17:43.713004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.780 [2024-12-05 12:17:43.713156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.780 [2024-12-05 12:17:43.713308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.780 [2024-12-05 12:17:43.713313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.780 [2024-12-05 12:17:43.713318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.780 [2024-12-05 12:17:43.713323] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.780 [2024-12-05 12:17:43.725323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.780 [2024-12-05 12:17:43.725709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.780 [2024-12-05 12:17:43.725722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.780 [2024-12-05 12:17:43.725727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.781 [2024-12-05 12:17:43.725878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.781 [2024-12-05 12:17:43.726030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.781 [2024-12-05 12:17:43.726035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.781 [2024-12-05 12:17:43.726040] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.781 [2024-12-05 12:17:43.726045] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.781 [2024-12-05 12:17:43.738051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.781 [2024-12-05 12:17:43.738533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.781 [2024-12-05 12:17:43.738545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.781 [2024-12-05 12:17:43.738550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.781 [2024-12-05 12:17:43.738705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.781 [2024-12-05 12:17:43.738857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.781 [2024-12-05 12:17:43.738863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.781 [2024-12-05 12:17:43.738868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.781 [2024-12-05 12:17:43.738873] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.781 [2024-12-05 12:17:43.750733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.781 [2024-12-05 12:17:43.751220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.781 [2024-12-05 12:17:43.751232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.781 [2024-12-05 12:17:43.751237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.781 [2024-12-05 12:17:43.751389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.781 [2024-12-05 12:17:43.751545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.781 [2024-12-05 12:17:43.751552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.781 [2024-12-05 12:17:43.751557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.781 [2024-12-05 12:17:43.751561] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.781 [2024-12-05 12:17:43.763416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.781 [2024-12-05 12:17:43.763913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.781 [2024-12-05 12:17:43.763926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.781 [2024-12-05 12:17:43.763931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.781 [2024-12-05 12:17:43.764082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.781 [2024-12-05 12:17:43.764234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.781 [2024-12-05 12:17:43.764239] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.781 [2024-12-05 12:17:43.764244] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.781 [2024-12-05 12:17:43.764249] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.781 [2024-12-05 12:17:43.776117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.781 [2024-12-05 12:17:43.776573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.781 [2024-12-05 12:17:43.776585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.781 [2024-12-05 12:17:43.776591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.781 [2024-12-05 12:17:43.776743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.781 [2024-12-05 12:17:43.776894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.781 [2024-12-05 12:17:43.776900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.781 [2024-12-05 12:17:43.776907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.781 [2024-12-05 12:17:43.776913] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.781 [2024-12-05 12:17:43.788774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.781 [2024-12-05 12:17:43.789247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.781 [2024-12-05 12:17:43.789259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.781 [2024-12-05 12:17:43.789264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.781 [2024-12-05 12:17:43.789415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.781 [2024-12-05 12:17:43.789578] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.781 [2024-12-05 12:17:43.789584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.781 [2024-12-05 12:17:43.789589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.781 [2024-12-05 12:17:43.789594] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.781 [2024-12-05 12:17:43.801448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.781 [2024-12-05 12:17:43.801929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.781 [2024-12-05 12:17:43.801941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.781 [2024-12-05 12:17:43.801946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.781 [2024-12-05 12:17:43.802098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.781 [2024-12-05 12:17:43.802250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.781 [2024-12-05 12:17:43.802255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.781 [2024-12-05 12:17:43.802261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.781 [2024-12-05 12:17:43.802265] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.781 [2024-12-05 12:17:43.814119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:18.781 [2024-12-05 12:17:43.814583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:18.781 [2024-12-05 12:17:43.814595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:18.781 [2024-12-05 12:17:43.814601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:18.781 [2024-12-05 12:17:43.814752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:18.781 [2024-12-05 12:17:43.814904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:18.781 [2024-12-05 12:17:43.814909] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:18.781 [2024-12-05 12:17:43.814914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:18.781 [2024-12-05 12:17:43.814919] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:18.781 [2024-12-05 12:17:43.826775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.042 [2024-12-05 12:17:43.827252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.042 [2024-12-05 12:17:43.827265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.042 [2024-12-05 12:17:43.827270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.042 [2024-12-05 12:17:43.827422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.042 [2024-12-05 12:17:43.827580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.042 [2024-12-05 12:17:43.827586] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.042 [2024-12-05 12:17:43.827592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.042 [2024-12-05 12:17:43.827596] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.042 [2024-12-05 12:17:43.839439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.042 [2024-12-05 12:17:43.839942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.042 [2024-12-05 12:17:43.839954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.042 [2024-12-05 12:17:43.839960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.042 [2024-12-05 12:17:43.840111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.042 [2024-12-05 12:17:43.840263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.042 [2024-12-05 12:17:43.840268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.042 [2024-12-05 12:17:43.840273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.042 [2024-12-05 12:17:43.840278] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.042 [2024-12-05 12:17:43.852119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.042 [2024-12-05 12:17:43.852663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.042 [2024-12-05 12:17:43.852694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.042 [2024-12-05 12:17:43.852702] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.042 [2024-12-05 12:17:43.852870] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.042 [2024-12-05 12:17:43.853025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.042 [2024-12-05 12:17:43.853031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.042 [2024-12-05 12:17:43.853037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.042 [2024-12-05 12:17:43.853043] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.042 [2024-12-05 12:17:43.864751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.042 [2024-12-05 12:17:43.865278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.042 [2024-12-05 12:17:43.865311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.042 [2024-12-05 12:17:43.865320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.042 [2024-12-05 12:17:43.865495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.042 [2024-12-05 12:17:43.865651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.042 [2024-12-05 12:17:43.865657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.042 [2024-12-05 12:17:43.865663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.042 [2024-12-05 12:17:43.865668] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.042 [2024-12-05 12:17:43.877379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.043 [2024-12-05 12:17:43.877904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.043 [2024-12-05 12:17:43.877934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.043 [2024-12-05 12:17:43.877942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.043 [2024-12-05 12:17:43.878110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.043 [2024-12-05 12:17:43.878265] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.043 [2024-12-05 12:17:43.878272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.043 [2024-12-05 12:17:43.878278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.043 [2024-12-05 12:17:43.878283] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.043 [2024-12-05 12:17:43.890148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.043 [2024-12-05 12:17:43.890713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.043 [2024-12-05 12:17:43.890743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.043 [2024-12-05 12:17:43.890752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.043 [2024-12-05 12:17:43.890919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.043 [2024-12-05 12:17:43.891074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.043 [2024-12-05 12:17:43.891080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.043 [2024-12-05 12:17:43.891086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.043 [2024-12-05 12:17:43.891092] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.043 [2024-12-05 12:17:43.902801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.043 [2024-12-05 12:17:43.903305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.043 [2024-12-05 12:17:43.903335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.043 [2024-12-05 12:17:43.903344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.043 [2024-12-05 12:17:43.903525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.043 [2024-12-05 12:17:43.903681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.043 [2024-12-05 12:17:43.903687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.043 [2024-12-05 12:17:43.903693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.043 [2024-12-05 12:17:43.903698] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.043 [2024-12-05 12:17:43.915545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.043 [2024-12-05 12:17:43.916112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.043 [2024-12-05 12:17:43.916142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.043 [2024-12-05 12:17:43.916151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.043 [2024-12-05 12:17:43.916319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.043 [2024-12-05 12:17:43.916480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.043 [2024-12-05 12:17:43.916487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.043 [2024-12-05 12:17:43.916493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.043 [2024-12-05 12:17:43.916498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.043 [2024-12-05 12:17:43.928193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.043 [2024-12-05 12:17:43.928791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.043 [2024-12-05 12:17:43.928821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.043 [2024-12-05 12:17:43.928830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.043 [2024-12-05 12:17:43.928998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.043 [2024-12-05 12:17:43.929153] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.043 [2024-12-05 12:17:43.929161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.043 [2024-12-05 12:17:43.929167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.043 [2024-12-05 12:17:43.929173] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.043 [2024-12-05 12:17:43.940901] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.043 [2024-12-05 12:17:43.941474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.043 [2024-12-05 12:17:43.941505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.043 [2024-12-05 12:17:43.941514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.043 [2024-12-05 12:17:43.941682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.043 [2024-12-05 12:17:43.941837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.043 [2024-12-05 12:17:43.941844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.043 [2024-12-05 12:17:43.941854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.043 [2024-12-05 12:17:43.941861] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.043 [2024-12-05 12:17:43.953584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.043 [2024-12-05 12:17:43.954202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.043 [2024-12-05 12:17:43.954232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.043 [2024-12-05 12:17:43.954241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.043 [2024-12-05 12:17:43.954408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.043 [2024-12-05 12:17:43.954571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.043 [2024-12-05 12:17:43.954578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.043 [2024-12-05 12:17:43.954584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.043 [2024-12-05 12:17:43.954590] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.043 [2024-12-05 12:17:43.966313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.043 [2024-12-05 12:17:43.966769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.043 [2024-12-05 12:17:43.966784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.043 [2024-12-05 12:17:43.966789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.043 [2024-12-05 12:17:43.966942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.043 [2024-12-05 12:17:43.967095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.043 [2024-12-05 12:17:43.967101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.043 [2024-12-05 12:17:43.967106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.043 [2024-12-05 12:17:43.967111] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.043 [2024-12-05 12:17:43.978978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.043 [2024-12-05 12:17:43.979466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.043 [2024-12-05 12:17:43.979479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.043 [2024-12-05 12:17:43.979485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.043 [2024-12-05 12:17:43.979636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.043 [2024-12-05 12:17:43.979788] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.043 [2024-12-05 12:17:43.979794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.043 [2024-12-05 12:17:43.979799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.043 [2024-12-05 12:17:43.979804] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.043 [2024-12-05 12:17:43.991672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.044 [2024-12-05 12:17:43.992273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.044 [2024-12-05 12:17:43.992303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.044 [2024-12-05 12:17:43.992311] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.044 [2024-12-05 12:17:43.992486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.044 [2024-12-05 12:17:43.992641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.044 [2024-12-05 12:17:43.992648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.044 [2024-12-05 12:17:43.992653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.044 [2024-12-05 12:17:43.992659] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.044 [2024-12-05 12:17:44.004369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.044 [2024-12-05 12:17:44.004864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.044 [2024-12-05 12:17:44.004894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.044 [2024-12-05 12:17:44.004903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.044 [2024-12-05 12:17:44.005070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.044 [2024-12-05 12:17:44.005225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.044 [2024-12-05 12:17:44.005232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.044 [2024-12-05 12:17:44.005238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.044 [2024-12-05 12:17:44.005243] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.044 [2024-12-05 12:17:44.017093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.044 [2024-12-05 12:17:44.017673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.044 [2024-12-05 12:17:44.017703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.044 [2024-12-05 12:17:44.017712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.044 [2024-12-05 12:17:44.017880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.044 [2024-12-05 12:17:44.018035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.044 [2024-12-05 12:17:44.018041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.044 [2024-12-05 12:17:44.018047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.044 [2024-12-05 12:17:44.018053] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.044 [2024-12-05 12:17:44.029760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.044 [2024-12-05 12:17:44.030307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.044 [2024-12-05 12:17:44.030344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.044 [2024-12-05 12:17:44.030353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.044 [2024-12-05 12:17:44.030531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.044 [2024-12-05 12:17:44.030687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.044 [2024-12-05 12:17:44.030693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.044 [2024-12-05 12:17:44.030699] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.044 [2024-12-05 12:17:44.030704] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.044 [2024-12-05 12:17:44.042406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.044 [2024-12-05 12:17:44.042959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.044 [2024-12-05 12:17:44.042989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.044 [2024-12-05 12:17:44.042998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.044 [2024-12-05 12:17:44.043166] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.044 [2024-12-05 12:17:44.043321] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.044 [2024-12-05 12:17:44.043327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.044 [2024-12-05 12:17:44.043332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.044 [2024-12-05 12:17:44.043338] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.044 [2024-12-05 12:17:44.055045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.044 [2024-12-05 12:17:44.055656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.044 [2024-12-05 12:17:44.055686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.044 [2024-12-05 12:17:44.055694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.044 [2024-12-05 12:17:44.055862] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.044 [2024-12-05 12:17:44.056017] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.044 [2024-12-05 12:17:44.056023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.044 [2024-12-05 12:17:44.056029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.044 [2024-12-05 12:17:44.056035] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.044 [2024-12-05 12:17:44.067744] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.044 [2024-12-05 12:17:44.068340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.044 [2024-12-05 12:17:44.068370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.044 [2024-12-05 12:17:44.068379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.044 [2024-12-05 12:17:44.068558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.044 [2024-12-05 12:17:44.068713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.044 [2024-12-05 12:17:44.068720] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.044 [2024-12-05 12:17:44.068725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.044 [2024-12-05 12:17:44.068731] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.044 [2024-12-05 12:17:44.080438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.044 [2024-12-05 12:17:44.080944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.044 [2024-12-05 12:17:44.080975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.044 [2024-12-05 12:17:44.080984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.044 [2024-12-05 12:17:44.081151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.044 [2024-12-05 12:17:44.081306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.044 [2024-12-05 12:17:44.081313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.044 [2024-12-05 12:17:44.081318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.044 [2024-12-05 12:17:44.081324] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.306 [2024-12-05 12:17:44.093204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.306 [2024-12-05 12:17:44.093766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.306 [2024-12-05 12:17:44.093796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.306 [2024-12-05 12:17:44.093805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.306 [2024-12-05 12:17:44.093973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.306 [2024-12-05 12:17:44.094128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.306 [2024-12-05 12:17:44.094134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.306 [2024-12-05 12:17:44.094140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.306 [2024-12-05 12:17:44.094146] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.306 [2024-12-05 12:17:44.105859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.306 [2024-12-05 12:17:44.106434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.306 [2024-12-05 12:17:44.106469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.306 [2024-12-05 12:17:44.106478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.306 [2024-12-05 12:17:44.106646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.306 [2024-12-05 12:17:44.106801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.306 [2024-12-05 12:17:44.106807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.306 [2024-12-05 12:17:44.106816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.306 [2024-12-05 12:17:44.106822] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.306 [2024-12-05 12:17:44.118526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.306 [2024-12-05 12:17:44.119136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.306 [2024-12-05 12:17:44.119166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.306 [2024-12-05 12:17:44.119175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.306 [2024-12-05 12:17:44.119345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.306 [2024-12-05 12:17:44.119507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.306 [2024-12-05 12:17:44.119515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.306 [2024-12-05 12:17:44.119521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.306 [2024-12-05 12:17:44.119526] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.306 [2024-12-05 12:17:44.131226] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.306 [2024-12-05 12:17:44.131810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.306 [2024-12-05 12:17:44.131840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.306 [2024-12-05 12:17:44.131849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.306 [2024-12-05 12:17:44.132017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.306 [2024-12-05 12:17:44.132172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.306 [2024-12-05 12:17:44.132179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.306 [2024-12-05 12:17:44.132184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.306 [2024-12-05 12:17:44.132191] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1545634 Killed "${NVMF_APP[@]}" "$@"
00:34:19.306 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:34:19.306 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:34:19.306 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:34:19.306 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:34:19.306 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:19.306 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=1547341
00:34:19.306 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 1547341
00:34:19.306 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:34:19.306 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1547341 ']'
00:34:19.306 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:19.306 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
[2024-12-05 12:17:44.143913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.306 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:19.306 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:34:19.306 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:19.306 [2024-12-05 12:17:44.144418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.306 [2024-12-05 12:17:44.144433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.306 [2024-12-05 12:17:44.144438] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.306 [2024-12-05 12:17:44.144595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.306 [2024-12-05 12:17:44.144748] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.306 [2024-12-05 12:17:44.144755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.306 [2024-12-05 12:17:44.144761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.306 [2024-12-05 12:17:44.144767] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.306 [2024-12-05 12:17:44.156632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.306 [2024-12-05 12:17:44.157194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.306 [2024-12-05 12:17:44.157224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.306 [2024-12-05 12:17:44.157233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.306 [2024-12-05 12:17:44.157401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.306 [2024-12-05 12:17:44.157563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.306 [2024-12-05 12:17:44.157570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.306 [2024-12-05 12:17:44.157576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.306 [2024-12-05 12:17:44.157581] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.307 [2024-12-05 12:17:44.169306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.307 [2024-12-05 12:17:44.169897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.307 [2024-12-05 12:17:44.169927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.307 [2024-12-05 12:17:44.169936] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.307 [2024-12-05 12:17:44.170104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.307 [2024-12-05 12:17:44.170259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.307 [2024-12-05 12:17:44.170265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.307 [2024-12-05 12:17:44.170270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.307 [2024-12-05 12:17:44.170280] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.307 [2024-12-05 12:17:44.182001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.307 [2024-12-05 12:17:44.182558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.307 [2024-12-05 12:17:44.182588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.307 [2024-12-05 12:17:44.182596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.307 [2024-12-05 12:17:44.182767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.307 [2024-12-05 12:17:44.182922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.307 [2024-12-05 12:17:44.182929] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.307 [2024-12-05 12:17:44.182936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.307 [2024-12-05 12:17:44.182942] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.307 [2024-12-05 12:17:44.194668] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.307 [2024-12-05 12:17:44.195152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.307 [2024-12-05 12:17:44.195166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.307 [2024-12-05 12:17:44.195173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.307 [2024-12-05 12:17:44.195325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.307 [2024-12-05 12:17:44.195482] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.307 [2024-12-05 12:17:44.195489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.307 [2024-12-05 12:17:44.195495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.307 [2024-12-05 12:17:44.195501] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.307 [2024-12-05 12:17:44.205055] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:34:19.307 [2024-12-05 12:17:44.205113] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:19.307 [2024-12-05 12:17:44.207348] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.307 [2024-12-05 12:17:44.207977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.307 [2024-12-05 12:17:44.208007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.307 [2024-12-05 12:17:44.208016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.307 [2024-12-05 12:17:44.208184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.307 [2024-12-05 12:17:44.208340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.307 [2024-12-05 12:17:44.208346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.307 [2024-12-05 12:17:44.208356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.307 [2024-12-05 12:17:44.208362] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.307 [2024-12-05 12:17:44.220076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.307 [2024-12-05 12:17:44.220572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.307 [2024-12-05 12:17:44.220588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.307 [2024-12-05 12:17:44.220594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.307 [2024-12-05 12:17:44.220746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.307 [2024-12-05 12:17:44.220898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.307 [2024-12-05 12:17:44.220905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.307 [2024-12-05 12:17:44.220910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.307 [2024-12-05 12:17:44.220915] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.307 [2024-12-05 12:17:44.232815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.307 [2024-12-05 12:17:44.233380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.307 [2024-12-05 12:17:44.233410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.307 [2024-12-05 12:17:44.233419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.307 [2024-12-05 12:17:44.233595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.307 [2024-12-05 12:17:44.233751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.307 [2024-12-05 12:17:44.233758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.307 [2024-12-05 12:17:44.233763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.307 [2024-12-05 12:17:44.233769] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.307 [2024-12-05 12:17:44.245480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.307 [2024-12-05 12:17:44.245981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.307 [2024-12-05 12:17:44.245996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.307 [2024-12-05 12:17:44.246001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.307 [2024-12-05 12:17:44.246154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.307 [2024-12-05 12:17:44.246306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.307 [2024-12-05 12:17:44.246312] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.307 [2024-12-05 12:17:44.246318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.307 [2024-12-05 12:17:44.246322] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.307 [2024-12-05 12:17:44.258173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.307 [2024-12-05 12:17:44.258521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.307 [2024-12-05 12:17:44.258536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.307 [2024-12-05 12:17:44.258542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.308 [2024-12-05 12:17:44.258695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.308 [2024-12-05 12:17:44.258847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.308 [2024-12-05 12:17:44.258853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.308 [2024-12-05 12:17:44.258858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.308 [2024-12-05 12:17:44.258863] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.308 [2024-12-05 12:17:44.270857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.308 [2024-12-05 12:17:44.271416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.308 [2024-12-05 12:17:44.271446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.308 [2024-12-05 12:17:44.271462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.308 [2024-12-05 12:17:44.271634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.308 [2024-12-05 12:17:44.271789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.308 [2024-12-05 12:17:44.271796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.308 [2024-12-05 12:17:44.271801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.308 [2024-12-05 12:17:44.271807] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.308 [2024-12-05 12:17:44.283528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.308 [2024-12-05 12:17:44.284099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.308 [2024-12-05 12:17:44.284130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.308 [2024-12-05 12:17:44.284139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.308 [2024-12-05 12:17:44.284306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.308 [2024-12-05 12:17:44.284468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.308 [2024-12-05 12:17:44.284475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.308 [2024-12-05 12:17:44.284481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.308 [2024-12-05 12:17:44.284487] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.308 [2024-12-05 12:17:44.296207] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.308 [2024-12-05 12:17:44.296602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:19.308 [2024-12-05 12:17:44.296731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.308 [2024-12-05 12:17:44.296760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.308 [2024-12-05 12:17:44.296773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.308 [2024-12-05 12:17:44.296940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.308 [2024-12-05 12:17:44.297095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.308 [2024-12-05 12:17:44.297103] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.308 [2024-12-05 12:17:44.297108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.308 [2024-12-05 12:17:44.297114] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.308 [2024-12-05 12:17:44.308971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.308 [2024-12-05 12:17:44.309491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.308 [2024-12-05 12:17:44.309512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.308 [2024-12-05 12:17:44.309519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.308 [2024-12-05 12:17:44.309678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.308 [2024-12-05 12:17:44.309831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.308 [2024-12-05 12:17:44.309837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.308 [2024-12-05 12:17:44.309842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.308 [2024-12-05 12:17:44.309848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.308 [2024-12-05 12:17:44.321736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.308 [2024-12-05 12:17:44.322184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.308 [2024-12-05 12:17:44.322213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.308 [2024-12-05 12:17:44.322223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.308 [2024-12-05 12:17:44.322390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.308 [2024-12-05 12:17:44.322563] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.308 [2024-12-05 12:17:44.322571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.308 [2024-12-05 12:17:44.322576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.308 [2024-12-05 12:17:44.322582] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.308 [2024-12-05 12:17:44.325378] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:19.308 [2024-12-05 12:17:44.325398] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:19.308 [2024-12-05 12:17:44.325405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:19.308 [2024-12-05 12:17:44.325410] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:19.308 [2024-12-05 12:17:44.325415] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:19.308 [2024-12-05 12:17:44.326723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:19.308 [2024-12-05 12:17:44.326935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:19.308 [2024-12-05 12:17:44.326936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:19.308 [2024-12-05 12:17:44.334442] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.308 [2024-12-05 12:17:44.335051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.308 [2024-12-05 12:17:44.335082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.308 [2024-12-05 12:17:44.335091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.308 [2024-12-05 12:17:44.335260] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.308 [2024-12-05 12:17:44.335415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.308 [2024-12-05 12:17:44.335421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.308 [2024-12-05 12:17:44.335427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.308 [2024-12-05 12:17:44.335433] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.308 [2024-12-05 12:17:44.347147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.308 [2024-12-05 12:17:44.347741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.308 [2024-12-05 12:17:44.347772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.309 [2024-12-05 12:17:44.347781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.309 [2024-12-05 12:17:44.347950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.309 [2024-12-05 12:17:44.348105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.309 [2024-12-05 12:17:44.348111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.309 [2024-12-05 12:17:44.348117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.309 [2024-12-05 12:17:44.348123] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.570 [2024-12-05 12:17:44.359836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.570 [2024-12-05 12:17:44.360435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-12-05 12:17:44.360471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.570 [2024-12-05 12:17:44.360481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.570 [2024-12-05 12:17:44.360652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.570 [2024-12-05 12:17:44.360808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.570 [2024-12-05 12:17:44.360814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.570 [2024-12-05 12:17:44.360821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.570 [2024-12-05 12:17:44.360827] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.570 [2024-12-05 12:17:44.372539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.570 [2024-12-05 12:17:44.373113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-12-05 12:17:44.373142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.570 [2024-12-05 12:17:44.373151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.570 [2024-12-05 12:17:44.373319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.570 [2024-12-05 12:17:44.373480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.570 [2024-12-05 12:17:44.373488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.570 [2024-12-05 12:17:44.373493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.570 [2024-12-05 12:17:44.373500] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.570 [2024-12-05 12:17:44.385216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.570 [2024-12-05 12:17:44.385835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-12-05 12:17:44.385865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.570 [2024-12-05 12:17:44.385874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.570 [2024-12-05 12:17:44.386043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.570 [2024-12-05 12:17:44.386198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.570 [2024-12-05 12:17:44.386205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.570 [2024-12-05 12:17:44.386211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.570 [2024-12-05 12:17:44.386217] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.570 [2024-12-05 12:17:44.397936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.570 [2024-12-05 12:17:44.398547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.570 [2024-12-05 12:17:44.398577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.570 [2024-12-05 12:17:44.398586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.570 [2024-12-05 12:17:44.398757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.570 [2024-12-05 12:17:44.398912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.570 [2024-12-05 12:17:44.398919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.570 [2024-12-05 12:17:44.398925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.570 [2024-12-05 12:17:44.398930] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.570 [2024-12-05 12:17:44.410637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.571 [2024-12-05 12:17:44.411155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-12-05 12:17:44.411185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.571 [2024-12-05 12:17:44.411198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.571 [2024-12-05 12:17:44.411367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.571 [2024-12-05 12:17:44.411529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.571 [2024-12-05 12:17:44.411538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.571 [2024-12-05 12:17:44.411544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.571 [2024-12-05 12:17:44.411549] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.571 [2024-12-05 12:17:44.423393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.571 [2024-12-05 12:17:44.423755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-12-05 12:17:44.423770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.571 [2024-12-05 12:17:44.423776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.571 [2024-12-05 12:17:44.423928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.571 [2024-12-05 12:17:44.424081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.571 [2024-12-05 12:17:44.424086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.571 [2024-12-05 12:17:44.424091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.571 [2024-12-05 12:17:44.424096] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.571 [2024-12-05 12:17:44.436083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.571 [2024-12-05 12:17:44.436706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-12-05 12:17:44.436737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.571 [2024-12-05 12:17:44.436746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.571 [2024-12-05 12:17:44.436915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.571 [2024-12-05 12:17:44.437070] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.571 [2024-12-05 12:17:44.437076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.571 [2024-12-05 12:17:44.437082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.571 [2024-12-05 12:17:44.437088] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.571 [2024-12-05 12:17:44.448803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.571 [2024-12-05 12:17:44.449410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-12-05 12:17:44.449440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.571 [2024-12-05 12:17:44.449449] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.571 [2024-12-05 12:17:44.449628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.571 [2024-12-05 12:17:44.449787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.571 [2024-12-05 12:17:44.449794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.571 [2024-12-05 12:17:44.449801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.571 [2024-12-05 12:17:44.449808] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.571 [2024-12-05 12:17:44.461521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.571 [2024-12-05 12:17:44.462107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-12-05 12:17:44.462137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.571 [2024-12-05 12:17:44.462145] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.571 [2024-12-05 12:17:44.462313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.571 [2024-12-05 12:17:44.462475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.571 [2024-12-05 12:17:44.462483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.571 [2024-12-05 12:17:44.462488] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.571 [2024-12-05 12:17:44.462494] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.571 [2024-12-05 12:17:44.474200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.571 [2024-12-05 12:17:44.474787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-12-05 12:17:44.474818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.571 [2024-12-05 12:17:44.474826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.571 [2024-12-05 12:17:44.474995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.571 [2024-12-05 12:17:44.475150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.571 [2024-12-05 12:17:44.475156] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.571 [2024-12-05 12:17:44.475161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.571 [2024-12-05 12:17:44.475167] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.571 [2024-12-05 12:17:44.486879] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.571 [2024-12-05 12:17:44.487356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-12-05 12:17:44.487385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.571 [2024-12-05 12:17:44.487394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.571 [2024-12-05 12:17:44.487569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.571 [2024-12-05 12:17:44.487725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.571 [2024-12-05 12:17:44.487731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.571 [2024-12-05 12:17:44.487740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.571 [2024-12-05 12:17:44.487747] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.571 [2024-12-05 12:17:44.499608] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.571 [2024-12-05 12:17:44.500154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-12-05 12:17:44.500184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.571 [2024-12-05 12:17:44.500193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.571 [2024-12-05 12:17:44.500361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.571 [2024-12-05 12:17:44.500521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.571 [2024-12-05 12:17:44.500528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.571 [2024-12-05 12:17:44.500533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.571 [2024-12-05 12:17:44.500539] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.571 [2024-12-05 12:17:44.512240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.571 [2024-12-05 12:17:44.512596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.571 [2024-12-05 12:17:44.512612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.572 [2024-12-05 12:17:44.512617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.572 [2024-12-05 12:17:44.512770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.572 [2024-12-05 12:17:44.512921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.572 [2024-12-05 12:17:44.512927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.572 [2024-12-05 12:17:44.512932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.572 [2024-12-05 12:17:44.512937] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.572 [2024-12-05 12:17:44.524921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.572 [2024-12-05 12:17:44.525381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-12-05 12:17:44.525394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.572 [2024-12-05 12:17:44.525399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.572 [2024-12-05 12:17:44.525555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.572 [2024-12-05 12:17:44.525707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.572 [2024-12-05 12:17:44.525713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.572 [2024-12-05 12:17:44.525718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.572 [2024-12-05 12:17:44.525722] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.572 [2024-12-05 12:17:44.537561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.572 [2024-12-05 12:17:44.538188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-12-05 12:17:44.538218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.572 [2024-12-05 12:17:44.538226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.572 [2024-12-05 12:17:44.538395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.572 [2024-12-05 12:17:44.538556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.572 [2024-12-05 12:17:44.538563] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.572 [2024-12-05 12:17:44.538568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.572 [2024-12-05 12:17:44.538574] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.572 [2024-12-05 12:17:44.550271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.572 [2024-12-05 12:17:44.550839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-12-05 12:17:44.550869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.572 [2024-12-05 12:17:44.550878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.572 [2024-12-05 12:17:44.551045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.572 [2024-12-05 12:17:44.551200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.572 [2024-12-05 12:17:44.551206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.572 [2024-12-05 12:17:44.551212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.572 [2024-12-05 12:17:44.551217] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.572 [2024-12-05 12:17:44.562923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.572 [2024-12-05 12:17:44.563502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-12-05 12:17:44.563533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.572 [2024-12-05 12:17:44.563541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.572 [2024-12-05 12:17:44.563712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.572 [2024-12-05 12:17:44.563867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.572 [2024-12-05 12:17:44.563873] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.572 [2024-12-05 12:17:44.563879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.572 [2024-12-05 12:17:44.563885] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.572 [2024-12-05 12:17:44.575601] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.572 [2024-12-05 12:17:44.576186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-12-05 12:17:44.576216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.572 [2024-12-05 12:17:44.576228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.572 [2024-12-05 12:17:44.576396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.572 [2024-12-05 12:17:44.576560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.572 [2024-12-05 12:17:44.576568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.572 [2024-12-05 12:17:44.576573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.572 [2024-12-05 12:17:44.576579] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.572 4661.17 IOPS, 18.21 MiB/s [2024-12-05T11:17:44.621Z] [2024-12-05 12:17:44.588273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:19.572 [2024-12-05 12:17:44.588732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:19.572 [2024-12-05 12:17:44.588747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:19.572 [2024-12-05 12:17:44.588753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:19.572 [2024-12-05 12:17:44.588905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:19.572 [2024-12-05 12:17:44.589057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:19.572 [2024-12-05 12:17:44.589063] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:19.572 [2024-12-05 12:17:44.589067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:19.572 [2024-12-05 12:17:44.589072] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:19.572 [2024-12-05 12:17:44.600921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.572 [2024-12-05 12:17:44.601379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.572 [2024-12-05 12:17:44.601392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.572 [2024-12-05 12:17:44.601397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.572 [2024-12-05 12:17:44.601554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.572 [2024-12-05 12:17:44.601706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.572 [2024-12-05 12:17:44.601711] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.572 [2024-12-05 12:17:44.601717] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.572 [2024-12-05 12:17:44.601721] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.572 [2024-12-05 12:17:44.613556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.573 [2024-12-05 12:17:44.614108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.573 [2024-12-05 12:17:44.614138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.573 [2024-12-05 12:17:44.614147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.573 [2024-12-05 12:17:44.614315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.573 [2024-12-05 12:17:44.614480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.573 [2024-12-05 12:17:44.614487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.573 [2024-12-05 12:17:44.614493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.573 [2024-12-05 12:17:44.614498] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.834 [2024-12-05 12:17:44.626200] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.834 [2024-12-05 12:17:44.626827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.834 [2024-12-05 12:17:44.626857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.834 [2024-12-05 12:17:44.626866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.834 [2024-12-05 12:17:44.627034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.834 [2024-12-05 12:17:44.627189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.834 [2024-12-05 12:17:44.627196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.834 [2024-12-05 12:17:44.627201] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.834 [2024-12-05 12:17:44.627207] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.834 [2024-12-05 12:17:44.638945] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.834 [2024-12-05 12:17:44.639518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.834 [2024-12-05 12:17:44.639548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.834 [2024-12-05 12:17:44.639557] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.834 [2024-12-05 12:17:44.639727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.834 [2024-12-05 12:17:44.639883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.834 [2024-12-05 12:17:44.639889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.834 [2024-12-05 12:17:44.639895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.834 [2024-12-05 12:17:44.639901] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.834 [2024-12-05 12:17:44.651682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.834 [2024-12-05 12:17:44.652281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.834 [2024-12-05 12:17:44.652311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.834 [2024-12-05 12:17:44.652320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.834 [2024-12-05 12:17:44.652494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.834 [2024-12-05 12:17:44.652650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.834 [2024-12-05 12:17:44.652657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.834 [2024-12-05 12:17:44.652666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.834 [2024-12-05 12:17:44.652672] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.834 [2024-12-05 12:17:44.664381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.834 [2024-12-05 12:17:44.664845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.834 [2024-12-05 12:17:44.664860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.834 [2024-12-05 12:17:44.664866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.834 [2024-12-05 12:17:44.665018] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.834 [2024-12-05 12:17:44.665170] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.834 [2024-12-05 12:17:44.665176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.834 [2024-12-05 12:17:44.665181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.834 [2024-12-05 12:17:44.665186] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.834 [2024-12-05 12:17:44.677051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.834 [2024-12-05 12:17:44.677508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.834 [2024-12-05 12:17:44.677522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.834 [2024-12-05 12:17:44.677527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.834 [2024-12-05 12:17:44.677679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.834 [2024-12-05 12:17:44.677831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.834 [2024-12-05 12:17:44.677837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.834 [2024-12-05 12:17:44.677843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.834 [2024-12-05 12:17:44.677848] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.834 [2024-12-05 12:17:44.689702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.834 [2024-12-05 12:17:44.690171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.835 [2024-12-05 12:17:44.690184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.835 [2024-12-05 12:17:44.690190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.835 [2024-12-05 12:17:44.690342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.835 [2024-12-05 12:17:44.690498] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.835 [2024-12-05 12:17:44.690506] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.835 [2024-12-05 12:17:44.690511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.835 [2024-12-05 12:17:44.690515] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.835 [2024-12-05 12:17:44.702365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.835 [2024-12-05 12:17:44.702922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.835 [2024-12-05 12:17:44.702935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.835 [2024-12-05 12:17:44.702941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.835 [2024-12-05 12:17:44.703094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.835 [2024-12-05 12:17:44.703247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.835 [2024-12-05 12:17:44.703252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.835 [2024-12-05 12:17:44.703257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.835 [2024-12-05 12:17:44.703262] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.835 [2024-12-05 12:17:44.715102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.835 [2024-12-05 12:17:44.715441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.835 [2024-12-05 12:17:44.715459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.835 [2024-12-05 12:17:44.715466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.835 [2024-12-05 12:17:44.715618] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.835 [2024-12-05 12:17:44.715769] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.835 [2024-12-05 12:17:44.715775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.835 [2024-12-05 12:17:44.715780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.835 [2024-12-05 12:17:44.715785] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.835 [2024-12-05 12:17:44.727769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.835 [2024-12-05 12:17:44.728407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.835 [2024-12-05 12:17:44.728437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.835 [2024-12-05 12:17:44.728446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.835 [2024-12-05 12:17:44.728624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.835 [2024-12-05 12:17:44.728779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.835 [2024-12-05 12:17:44.728786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.835 [2024-12-05 12:17:44.728792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.835 [2024-12-05 12:17:44.728799] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.835 [2024-12-05 12:17:44.740507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.835 [2024-12-05 12:17:44.741072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.835 [2024-12-05 12:17:44.741102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.835 [2024-12-05 12:17:44.741115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.835 [2024-12-05 12:17:44.741283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.835 [2024-12-05 12:17:44.741438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.835 [2024-12-05 12:17:44.741444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.835 [2024-12-05 12:17:44.741449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.835 [2024-12-05 12:17:44.741460] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.835 [2024-12-05 12:17:44.753172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.835 [2024-12-05 12:17:44.753757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.835 [2024-12-05 12:17:44.753787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.835 [2024-12-05 12:17:44.753796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.835 [2024-12-05 12:17:44.753964] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.835 [2024-12-05 12:17:44.754119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.835 [2024-12-05 12:17:44.754126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.835 [2024-12-05 12:17:44.754131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.835 [2024-12-05 12:17:44.754137] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.835 [2024-12-05 12:17:44.765854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.835 [2024-12-05 12:17:44.766422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.835 [2024-12-05 12:17:44.766452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.835 [2024-12-05 12:17:44.766468] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.835 [2024-12-05 12:17:44.766636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.835 [2024-12-05 12:17:44.766791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.835 [2024-12-05 12:17:44.766797] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.835 [2024-12-05 12:17:44.766802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.835 [2024-12-05 12:17:44.766808] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.835 [2024-12-05 12:17:44.778528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.835 [2024-12-05 12:17:44.779118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.835 [2024-12-05 12:17:44.779147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.835 [2024-12-05 12:17:44.779156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.835 [2024-12-05 12:17:44.779324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.835 [2024-12-05 12:17:44.779489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.835 [2024-12-05 12:17:44.779497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.835 [2024-12-05 12:17:44.779502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.835 [2024-12-05 12:17:44.779508] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.836 [2024-12-05 12:17:44.791221] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.836 [2024-12-05 12:17:44.791847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.836 [2024-12-05 12:17:44.791877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.836 [2024-12-05 12:17:44.791886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.836 [2024-12-05 12:17:44.792054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.836 [2024-12-05 12:17:44.792209] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.836 [2024-12-05 12:17:44.792215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.836 [2024-12-05 12:17:44.792220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.836 [2024-12-05 12:17:44.792227] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.836 [2024-12-05 12:17:44.803939] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.836 [2024-12-05 12:17:44.804435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.836 [2024-12-05 12:17:44.804449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.836 [2024-12-05 12:17:44.804458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.836 [2024-12-05 12:17:44.804610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.836 [2024-12-05 12:17:44.804762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.836 [2024-12-05 12:17:44.804768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.836 [2024-12-05 12:17:44.804774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.836 [2024-12-05 12:17:44.804779] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.836 [2024-12-05 12:17:44.816631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.836 [2024-12-05 12:17:44.817121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.836 [2024-12-05 12:17:44.817134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.836 [2024-12-05 12:17:44.817140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.836 [2024-12-05 12:17:44.817292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.836 [2024-12-05 12:17:44.817443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.836 [2024-12-05 12:17:44.817449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.836 [2024-12-05 12:17:44.817462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.836 [2024-12-05 12:17:44.817467] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.836 [2024-12-05 12:17:44.829314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.836 [2024-12-05 12:17:44.829828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.836 [2024-12-05 12:17:44.829840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.836 [2024-12-05 12:17:44.829846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.836 [2024-12-05 12:17:44.829998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.836 [2024-12-05 12:17:44.830150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.836 [2024-12-05 12:17:44.830155] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.836 [2024-12-05 12:17:44.830161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.836 [2024-12-05 12:17:44.830165] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.836 [2024-12-05 12:17:44.842011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.836 [2024-12-05 12:17:44.842463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.836 [2024-12-05 12:17:44.842475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.836 [2024-12-05 12:17:44.842481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.836 [2024-12-05 12:17:44.842634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.836 [2024-12-05 12:17:44.842786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.836 [2024-12-05 12:17:44.842791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.836 [2024-12-05 12:17:44.842796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.836 [2024-12-05 12:17:44.842801] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.836 [2024-12-05 12:17:44.854650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.836 [2024-12-05 12:17:44.855155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.836 [2024-12-05 12:17:44.855167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.836 [2024-12-05 12:17:44.855172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.836 [2024-12-05 12:17:44.855323] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.836 [2024-12-05 12:17:44.855479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.836 [2024-12-05 12:17:44.855485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.836 [2024-12-05 12:17:44.855490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.836 [2024-12-05 12:17:44.855495] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.836 [2024-12-05 12:17:44.867351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.836 [2024-12-05 12:17:44.867922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.836 [2024-12-05 12:17:44.867953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.836 [2024-12-05 12:17:44.867962] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.836 [2024-12-05 12:17:44.868131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.836 [2024-12-05 12:17:44.868286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.836 [2024-12-05 12:17:44.868293] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.836 [2024-12-05 12:17:44.868298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.836 [2024-12-05 12:17:44.868304] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:19.836 [2024-12-05 12:17:44.880035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:19.836 [2024-12-05 12:17:44.880680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:19.836 [2024-12-05 12:17:44.880710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:19.836 [2024-12-05 12:17:44.880719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:19.836 [2024-12-05 12:17:44.880888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:19.836 [2024-12-05 12:17:44.881043] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:19.836 [2024-12-05 12:17:44.881049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:19.836 [2024-12-05 12:17:44.881055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:19.837 [2024-12-05 12:17:44.881060] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:20.097 [2024-12-05 12:17:44.892794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:20.097 [2024-12-05 12:17:44.893282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.097 [2024-12-05 12:17:44.893297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:20.097 [2024-12-05 12:17:44.893302] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:20.097 [2024-12-05 12:17:44.893459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:20.097 [2024-12-05 12:17:44.893612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:20.097 [2024-12-05 12:17:44.893618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:20.097 [2024-12-05 12:17:44.893624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:20.097 [2024-12-05 12:17:44.893629] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:20.097 [2024-12-05 12:17:44.905483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:20.097 [2024-12-05 12:17:44.906048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.097 [2024-12-05 12:17:44.906078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:20.097 [2024-12-05 12:17:44.906090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:20.097 [2024-12-05 12:17:44.906258] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:20.097 [2024-12-05 12:17:44.906413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:20.098 [2024-12-05 12:17:44.906419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:20.098 [2024-12-05 12:17:44.906425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:20.098 [2024-12-05 12:17:44.906431] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:20.098 [2024-12-05 12:17:44.918214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:20.098 [2024-12-05 12:17:44.918705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.098 [2024-12-05 12:17:44.918720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:20.098 [2024-12-05 12:17:44.918727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:20.098 [2024-12-05 12:17:44.918880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:20.098 [2024-12-05 12:17:44.919032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:20.098 [2024-12-05 12:17:44.919039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:20.098 [2024-12-05 12:17:44.919044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:20.098 [2024-12-05 12:17:44.919049] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:20.098 [2024-12-05 12:17:44.930905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:20.098 [2024-12-05 12:17:44.931400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.098 [2024-12-05 12:17:44.931414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:20.098 [2024-12-05 12:17:44.931419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:20.098 [2024-12-05 12:17:44.931576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:20.098 [2024-12-05 12:17:44.931728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:20.098 [2024-12-05 12:17:44.931734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:20.098 [2024-12-05 12:17:44.931740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:20.098 [2024-12-05 12:17:44.931744] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:20.098 [2024-12-05 12:17:44.943594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:34:20.098 [2024-12-05 12:17:44.944065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:20.098 [2024-12-05 12:17:44.944079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420
00:34:20.098 [2024-12-05 12:17:44.944084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set
00:34:20.098 [2024-12-05 12:17:44.944236] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor
00:34:20.098 [2024-12-05 12:17:44.944391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:34:20.098 [2024-12-05 12:17:44.944398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:34:20.098 [2024-12-05 12:17:44.944403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:34:20.098 [2024-12-05 12:17:44.944407] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:34:20.098 [2024-12-05 12:17:44.956265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:20.098 [2024-12-05 12:17:44.956703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-12-05 12:17:44.956733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:20.098 [2024-12-05 12:17:44.956743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:20.098 [2024-12-05 12:17:44.956911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:20.098 [2024-12-05 12:17:44.957066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:20.098 [2024-12-05 12:17:44.957074] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:20.098 [2024-12-05 12:17:44.957080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:20.098 [2024-12-05 12:17:44.957087] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:20.098 [2024-12-05 12:17:44.968952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:20.098 [2024-12-05 12:17:44.969449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-12-05 12:17:44.969469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:20.098 [2024-12-05 12:17:44.969474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:20.098 [2024-12-05 12:17:44.969626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:20.098 [2024-12-05 12:17:44.969779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:20.098 [2024-12-05 12:17:44.969784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:20.098 [2024-12-05 12:17:44.969789] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:20.098 [2024-12-05 12:17:44.969794] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:20.098 [2024-12-05 12:17:44.981655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:20.098 [2024-12-05 12:17:44.982110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-12-05 12:17:44.982123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:20.098 [2024-12-05 12:17:44.982128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:20.098 [2024-12-05 12:17:44.982281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:20.098 [2024-12-05 12:17:44.982435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:20.098 [2024-12-05 12:17:44.982441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:20.098 [2024-12-05 12:17:44.982451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:20.098 [2024-12-05 12:17:44.982460] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:20.098 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:20.098 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:34:20.098 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:20.098 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:20.098 12:17:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:20.098 [2024-12-05 12:17:44.994340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:20.098 [2024-12-05 12:17:44.994892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.098 [2024-12-05 12:17:44.994905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:20.098 [2024-12-05 12:17:44.994911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:20.098 [2024-12-05 12:17:44.995063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:20.098 [2024-12-05 12:17:44.995214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:20.098 [2024-12-05 12:17:44.995220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:20.098 [2024-12-05 12:17:44.995227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:20.098 [2024-12-05 12:17:44.995232] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:20.099 [2024-12-05 12:17:45.007086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:20.099 [2024-12-05 12:17:45.007550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.099 [2024-12-05 12:17:45.007564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:20.099 [2024-12-05 12:17:45.007570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:20.099 [2024-12-05 12:17:45.007722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:20.099 [2024-12-05 12:17:45.007874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:20.099 [2024-12-05 12:17:45.007880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:20.099 [2024-12-05 12:17:45.007885] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:20.099 [2024-12-05 12:17:45.007890] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:20.099 [2024-12-05 12:17:45.019746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:20.099 [2024-12-05 12:17:45.020291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.099 [2024-12-05 12:17:45.020321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:20.099 [2024-12-05 12:17:45.020330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:20.099 [2024-12-05 12:17:45.020505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:20.099 [2024-12-05 12:17:45.020660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:20.099 [2024-12-05 12:17:45.020672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:20.099 [2024-12-05 12:17:45.020677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:20.099 [2024-12-05 12:17:45.020683] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:20.099 [2024-12-05 12:17:45.032396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:20.099 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:20.099 [2024-12-05 12:17:45.032861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.099 [2024-12-05 12:17:45.032892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:20.099 [2024-12-05 12:17:45.032900] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:20.099 [2024-12-05 12:17:45.033068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:20.099 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:20.099 [2024-12-05 12:17:45.033223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:20.099 [2024-12-05 12:17:45.033231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:20.099 [2024-12-05 12:17:45.033237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:20.099 [2024-12-05 12:17:45.033243] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:20.099 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.099 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:20.099 [2024-12-05 12:17:45.039383] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.099 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.099 [2024-12-05 12:17:45.045105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:20.099 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:20.099 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.099 [2024-12-05 12:17:45.045640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.099 [2024-12-05 12:17:45.045655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:20.099 [2024-12-05 12:17:45.045661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:20.099 [2024-12-05 12:17:45.045813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:20.099 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:20.099 [2024-12-05 12:17:45.045965] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:20.099 [2024-12-05 12:17:45.045971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:20.099 [2024-12-05 12:17:45.045976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:34:20.099 [2024-12-05 12:17:45.045981] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:34:20.099 [2024-12-05 12:17:45.057834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:20.099 [2024-12-05 12:17:45.058416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.099 [2024-12-05 12:17:45.058449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:20.099 [2024-12-05 12:17:45.058465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:20.099 [2024-12-05 12:17:45.058636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:20.099 [2024-12-05 12:17:45.058792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:20.099 [2024-12-05 12:17:45.058798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:20.099 [2024-12-05 12:17:45.058804] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:20.099 [2024-12-05 12:17:45.058809] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:20.099 [2024-12-05 12:17:45.070527] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:20.099 [2024-12-05 12:17:45.071122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.099 [2024-12-05 12:17:45.071152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:20.099 [2024-12-05 12:17:45.071161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:20.099 [2024-12-05 12:17:45.071329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:20.099 [2024-12-05 12:17:45.071490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:20.099 [2024-12-05 12:17:45.071497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:20.099 [2024-12-05 12:17:45.071503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:20.099 [2024-12-05 12:17:45.071509] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:20.099 Malloc0 00:34:20.099 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.099 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:20.099 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.099 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:20.099 [2024-12-05 12:17:45.083230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:20.099 [2024-12-05 12:17:45.083699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.099 [2024-12-05 12:17:45.083728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:20.099 [2024-12-05 12:17:45.083737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:20.099 [2024-12-05 12:17:45.083905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:20.100 [2024-12-05 12:17:45.084060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:20.100 [2024-12-05 12:17:45.084066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:20.100 [2024-12-05 12:17:45.084072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:20.100 [2024-12-05 12:17:45.084078] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:20.100 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.100 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:20.100 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.100 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:20.100 [2024-12-05 12:17:45.095953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:20.100 [2024-12-05 12:17:45.096561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:20.100 [2024-12-05 12:17:45.096591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x215f010 with addr=10.0.0.2, port=4420 00:34:20.100 [2024-12-05 12:17:45.096600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x215f010 is same with the state(6) to be set 00:34:20.100 [2024-12-05 12:17:45.096771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x215f010 (9): Bad file descriptor 00:34:20.100 [2024-12-05 12:17:45.096926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:34:20.100 [2024-12-05 12:17:45.096933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:34:20.100 [2024-12-05 12:17:45.096938] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:34:20.100 [2024-12-05 12:17:45.096944] bdev_nvme.c:2284:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:34:20.100 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.100 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:20.100 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.100 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:20.100 [2024-12-05 12:17:45.105233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:20.100 [2024-12-05 12:17:45.108665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:34:20.100 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.100 12:17:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1546165 00:34:20.359 [2024-12-05 12:17:45.174304] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:34:21.559 4783.29 IOPS, 18.68 MiB/s [2024-12-05T11:17:47.990Z] 5833.25 IOPS, 22.79 MiB/s [2024-12-05T11:17:48.932Z] 6626.22 IOPS, 25.88 MiB/s [2024-12-05T11:17:49.875Z] 7281.90 IOPS, 28.44 MiB/s [2024-12-05T11:17:50.819Z] 7811.18 IOPS, 30.51 MiB/s [2024-12-05T11:17:51.764Z] 8252.67 IOPS, 32.24 MiB/s [2024-12-05T11:17:52.706Z] 8629.08 IOPS, 33.71 MiB/s [2024-12-05T11:17:53.648Z] 8958.14 IOPS, 34.99 MiB/s [2024-12-05T11:17:53.648Z] 9248.73 IOPS, 36.13 MiB/s 00:34:28.599 Latency(us) 00:34:28.599 [2024-12-05T11:17:53.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:28.599 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:34:28.599 Verification LBA range: start 0x0 length 0x4000 00:34:28.599 Nvme1n1 : 15.01 9251.65 36.14 13132.34 0.00 5698.79 563.20 15400.96 00:34:28.599 [2024-12-05T11:17:53.648Z] =================================================================================================================== 00:34:28.599 [2024-12-05T11:17:53.648Z] Total : 9251.65 36.14 13132.34 0.00 5698.79 563.20 15400.96 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@99 -- # sync 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # set +e 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:28.861 rmmod nvme_tcp 00:34:28.861 rmmod nvme_fabrics 00:34:28.861 rmmod nvme_keyring 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # set -e 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # return 0 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # '[' -n 1547341 ']' 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@337 -- # killprocess 1547341 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1547341 ']' 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1547341 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1547341 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1547341' 00:34:28.861 killing process with pid 1547341 00:34:28.861 
12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1547341 00:34:28.861 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1547341 00:34:29.122 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:29.122 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # nvmf_fini 00:34:29.122 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@254 -- # local dev 00:34:29.122 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@257 -- # remove_target_ns 00:34:29.122 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:29.122 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:29.122 12:17:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@121 -- # return 0 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 
00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # _dev=0 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # dev_map=() 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@274 -- # iptr 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@548 -- # iptables-save 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@548 -- # iptables-restore 00:34:31.035 00:34:31.035 real 0m28.297s 00:34:31.035 user 1m3.339s 00:34:31.035 sys 0m7.684s 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:31.035 12:17:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:31.035 ************************************ 00:34:31.035 END TEST nvmf_bdevperf 
00:34:31.035 ************************************ 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:31.297 ************************************ 00:34:31.297 START TEST nvmf_target_disconnect 00:34:31.297 ************************************ 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:31.297 * Looking for test storage... 00:34:31.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:34:31.297 12:17:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:34:31.297 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:31.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.298 --rc genhtml_branch_coverage=1 00:34:31.298 --rc genhtml_function_coverage=1 00:34:31.298 --rc genhtml_legend=1 00:34:31.298 --rc geninfo_all_blocks=1 00:34:31.298 --rc geninfo_unexecuted_blocks=1 
00:34:31.298 00:34:31.298 ' 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:31.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.298 --rc genhtml_branch_coverage=1 00:34:31.298 --rc genhtml_function_coverage=1 00:34:31.298 --rc genhtml_legend=1 00:34:31.298 --rc geninfo_all_blocks=1 00:34:31.298 --rc geninfo_unexecuted_blocks=1 00:34:31.298 00:34:31.298 ' 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:31.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.298 --rc genhtml_branch_coverage=1 00:34:31.298 --rc genhtml_function_coverage=1 00:34:31.298 --rc genhtml_legend=1 00:34:31.298 --rc geninfo_all_blocks=1 00:34:31.298 --rc geninfo_unexecuted_blocks=1 00:34:31.298 00:34:31.298 ' 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:31.298 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:31.298 --rc genhtml_branch_coverage=1 00:34:31.298 --rc genhtml_function_coverage=1 00:34:31.298 --rc genhtml_legend=1 00:34:31.298 --rc geninfo_all_blocks=1 00:34:31.298 --rc geninfo_unexecuted_blocks=1 00:34:31.298 00:34:31.298 ' 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:31.298 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:34:31.559 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:31.559 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:31.559 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:31.560 12:17:56 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@50 
-- # : 0 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:34:31.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:34:31.560 12:17:56 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:34:39.705 12:18:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # e810=() 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # x722=() 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@157 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:34:39.705 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:39.705 12:18:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:34:39.705 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found 
net devices under 0000:4b:00.0: cvl_0_0' 00:34:39.705 Found net devices under 0000:4b:00.0: cvl_0_0 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:39.705 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:34:39.706 Found net devices under 0000:4b:00.1: cvl_0_1 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@266 -- # nvmf_tcp_init 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@247 -- # create_target_ns 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@28 -- # local 
-g _dev 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:34:39.706 12:18:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:34:39.706 10.0.0.1 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:39.706 12:18:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:34:39.706 10.0.0.2 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:34:39.706 
12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:39.706 12:18:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:39.706 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:39.707 12:18:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:34:39.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:39.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.654 ms 00:34:39.707 00:34:39.707 --- 10.0.0.1 ping statistics --- 00:34:39.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.707 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:39.707 12:18:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:34:39.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:39.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:34:39.707 00:34:39.707 --- 10.0.0.2 ping statistics --- 00:34:39.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:39.707 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair++ )) 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # return 0 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:39.707 12:18:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # return 1 00:34:39.707 
12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev= 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@160 -- # return 0 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=target1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # return 1 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev= 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@160 -- # return 0 00:34:39.707 12:18:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:34:39.707 ' 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:39.707 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:39.708 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:39.708 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:39.708 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:39.708 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:39.708 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:39.708 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:39.708 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:39.708 12:18:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:39.708 ************************************ 00:34:39.708 START TEST nvmf_target_disconnect_tc1 00:34:39.708 ************************************ 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:39.708 12:18:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:39.708 [2024-12-05 12:18:04.156612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:39.708 [2024-12-05 12:18:04.156709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17a0ae0 with addr=10.0.0.2, port=4420 00:34:39.708 [2024-12-05 12:18:04.156743] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:39.708 [2024-12-05 12:18:04.156762] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:39.708 [2024-12-05 12:18:04.156776] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:34:39.708 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:39.708 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:39.708 Initializing NVMe Controllers 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:34:39.708 00:34:39.708 real 0m0.142s 00:34:39.708 user 0m0.065s 00:34:39.708 sys 0m0.077s 
00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:39.708 ************************************ 00:34:39.708 END TEST nvmf_target_disconnect_tc1 00:34:39.708 ************************************ 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:39.708 ************************************ 00:34:39.708 START TEST nvmf_target_disconnect_tc2 00:34:39.708 ************************************ 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.708 12:18:04 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=1553513 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 1553513 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1553513 ']' 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:39.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:39.708 12:18:04 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:39.708 [2024-12-05 12:18:04.324515] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:34:39.708 [2024-12-05 12:18:04.324575] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:39.708 [2024-12-05 12:18:04.425862] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:39.708 [2024-12-05 12:18:04.477858] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:39.708 [2024-12-05 12:18:04.477905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:39.708 [2024-12-05 12:18:04.477914] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:39.708 [2024-12-05 12:18:04.477922] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:39.708 [2024-12-05 12:18:04.477928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:39.708 [2024-12-05 12:18:04.480010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:34:39.708 [2024-12-05 12:18:04.480055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:34:39.708 [2024-12-05 12:18:04.480215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:34:39.708 [2024-12-05 12:18:04.480216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:40.290 Malloc0 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.290 12:18:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:40.290 [2024-12-05 12:18:05.238811] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:40.290 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.291 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:40.291 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.291 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:40.291 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.291 12:18:05 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:40.291 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.291 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:40.291 [2024-12-05 12:18:05.279196] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:40.291 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.291 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:40.291 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.291 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:40.291 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.291 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1553862 00:34:40.291 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:40.291 12:18:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:42.910 12:18:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1553513 00:34:42.910 12:18:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Write completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Write completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read 
completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Write completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Write completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Write completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Write completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Write completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 [2024-12-05 12:18:07.318010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 00:34:42.910 Read completed with error (sct=0, sc=8) 00:34:42.910 starting I/O failed 
00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Write completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Write completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Write completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Write completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Write completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 Read completed with error (sct=0, sc=8) 00:34:42.911 starting I/O failed 00:34:42.911 
[2024-12-05 12:18:07.318290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:34:42.911 [2024-12-05 12:18:07.318866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.319555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.319910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.319969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.320350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.320363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.320733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.320794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.321104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.321118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 
00:34:42.911 [2024-12-05 12:18:07.321218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.321230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.321693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.321757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.322111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.322122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.322494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.322530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.322907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.322915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 
00:34:42.911 [2024-12-05 12:18:07.323267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.323277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.323742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.323800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.324135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.324146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.324710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.324768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.325174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.325184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 
00:34:42.911 [2024-12-05 12:18:07.325427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.325436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.325818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.325828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.326183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.326192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.326416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.326425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.326767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.326777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 
00:34:42.911 [2024-12-05 12:18:07.326986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.326995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.327343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.327352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.327741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.327752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.328093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.328102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.328458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.328469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 
00:34:42.911 [2024-12-05 12:18:07.328850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.328859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.329183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.329192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.329554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.329563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.329948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.329957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.330303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.330312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 
00:34:42.911 [2024-12-05 12:18:07.330640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.330648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.330981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.330988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.331333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.331341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.331656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.331664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.331871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.331879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 
00:34:42.911 [2024-12-05 12:18:07.332102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.332109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.332227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.332235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.332427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.332435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.332767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.332776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.333082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.333090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 
00:34:42.911 [2024-12-05 12:18:07.333407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.333415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.333696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.333704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.333925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.333933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.334326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.334334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.334662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.334671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 
00:34:42.911 [2024-12-05 12:18:07.335026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.335036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.335244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.335251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.335471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.335479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.335691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.335699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.335981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.335989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 
00:34:42.911 [2024-12-05 12:18:07.336341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.336349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.336749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.336756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.336967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.336975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.911 [2024-12-05 12:18:07.337309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.911 [2024-12-05 12:18:07.337316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.911 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.337631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.337639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 
00:34:42.912 [2024-12-05 12:18:07.337988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.337995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.338308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.338316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.338710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.338718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.339055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.339063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.339415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.339423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 
00:34:42.912 [2024-12-05 12:18:07.339625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.339633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.339928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.339942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.340262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.340269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.340602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.340610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.340941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.340949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 
00:34:42.912 [2024-12-05 12:18:07.341267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.341275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.341607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.341614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.341942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.341949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.342259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.342266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.342591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.342599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 
00:34:42.912 [2024-12-05 12:18:07.342836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.342845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.343034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.343042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.343394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.343402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.343750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.343757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.344035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.344042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 
00:34:42.912 [2024-12-05 12:18:07.344375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.344383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.344702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.344711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.345035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.345042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.345365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.345373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 00:34:42.912 [2024-12-05 12:18:07.345562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.912 [2024-12-05 12:18:07.345570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.912 qpair failed and we were unable to recover it. 
00:34:42.913 [2024-12-05 12:18:07.379223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.913 [2024-12-05 12:18:07.379230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.913 qpair failed and we were unable to recover it. 00:34:42.913 [2024-12-05 12:18:07.379534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.913 [2024-12-05 12:18:07.379542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.913 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.379853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.379860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.380177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.380185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.380510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.380518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 
00:34:42.914 [2024-12-05 12:18:07.380845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.380853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.381210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.381217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.381512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.381520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.381862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.381869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.382082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.382089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 
00:34:42.914 [2024-12-05 12:18:07.382434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.382442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.382762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.382772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.383136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.383144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.383470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.383479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.383761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.383768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 
00:34:42.914 [2024-12-05 12:18:07.384069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.384076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.384444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.384451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.384774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.384782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.385144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.385152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.385464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.385472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 
00:34:42.914 [2024-12-05 12:18:07.385790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.385797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.386122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.386130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.386339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.386347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.386678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.386686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.387006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.387014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 
00:34:42.914 [2024-12-05 12:18:07.387325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.387333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.387637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.387646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.387986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.387993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.388318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.388326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.388684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.388692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 
00:34:42.914 [2024-12-05 12:18:07.388846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.388853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.389125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.389133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.389440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.389449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.389769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.389777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.390116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.390123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 
00:34:42.914 [2024-12-05 12:18:07.390435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.390443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.390756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.390764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.390953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.390960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.391336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.391343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.391589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.391598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 
00:34:42.914 [2024-12-05 12:18:07.391910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.391919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.391996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.392004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.392311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.392320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.392634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.392643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.392841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.392850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 
00:34:42.914 [2024-12-05 12:18:07.393213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.393220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.393533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.393541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.393880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.393887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.394207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.394215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.394429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.394438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 
00:34:42.914 [2024-12-05 12:18:07.394717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.394724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.395048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.395056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.395379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.395386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.395716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.395724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.396043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.396050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 
00:34:42.914 [2024-12-05 12:18:07.396372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.396380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.396715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.396723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.397037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.397045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.397247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.397255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.397592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.397603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 
00:34:42.914 [2024-12-05 12:18:07.397809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.397816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.398021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.398028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.398350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.398358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.398675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.398683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.398933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.398941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 
00:34:42.914 [2024-12-05 12:18:07.399165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.399172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.399480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.399488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.399913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.399920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.400091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.400099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.400395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.400404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 
00:34:42.914 [2024-12-05 12:18:07.400850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.400857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.401215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.401223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.401423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.401432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.401684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.914 [2024-12-05 12:18:07.401693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.914 qpair failed and we were unable to recover it. 00:34:42.914 [2024-12-05 12:18:07.402036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.915 [2024-12-05 12:18:07.402045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.915 qpair failed and we were unable to recover it. 
00:34:42.915 [2024-12-05 12:18:07.402263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.915 [2024-12-05 12:18:07.402272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.915 qpair failed and we were unable to recover it. 00:34:42.915 [2024-12-05 12:18:07.402658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.915 [2024-12-05 12:18:07.402666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.915 qpair failed and we were unable to recover it. 00:34:42.915 [2024-12-05 12:18:07.402986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.915 [2024-12-05 12:18:07.402994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.915 qpair failed and we were unable to recover it. 00:34:42.915 [2024-12-05 12:18:07.403342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.915 [2024-12-05 12:18:07.403350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.915 qpair failed and we were unable to recover it. 00:34:42.915 [2024-12-05 12:18:07.403548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.915 [2024-12-05 12:18:07.403556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.915 qpair failed and we were unable to recover it. 
00:34:42.916 [2024-12-05 12:18:07.438869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.438877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.439102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.439109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.439490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.439498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.439670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.439679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.439983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.439990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 
00:34:42.916 [2024-12-05 12:18:07.440333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.440341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.440642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.440649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.440834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.440842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.441107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.441115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.441448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.441475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 
00:34:42.916 [2024-12-05 12:18:07.441785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.441793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.442127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.442134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.442446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.442461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.442791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.442798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.443121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.443129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 
00:34:42.916 [2024-12-05 12:18:07.443194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.443202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.443405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.443413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.443735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.443743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.444069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.444077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.444477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.444487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 
00:34:42.916 [2024-12-05 12:18:07.444833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.444840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.445040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.445047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.445375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.445382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.445719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.445727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.446049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.446057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 
00:34:42.916 [2024-12-05 12:18:07.446381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.916 [2024-12-05 12:18:07.446389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.916 qpair failed and we were unable to recover it. 00:34:42.916 [2024-12-05 12:18:07.446754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.446761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.447093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.447101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.447424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.447433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.447762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.447769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 
00:34:42.917 [2024-12-05 12:18:07.448076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.448085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.448391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.448400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.448713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.448722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.449047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.449056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.449412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.449421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 
00:34:42.917 [2024-12-05 12:18:07.449741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.449750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.449957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.449965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.450157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.450165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.450581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.450588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.450893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.450902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 
00:34:42.917 [2024-12-05 12:18:07.451094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.451102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.451427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.451435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.451756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.451763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.452071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.452079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.452414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.452422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 
00:34:42.917 [2024-12-05 12:18:07.452723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.452731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.453051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.453062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.453246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.453255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.453547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.453555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.453866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.453881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 
00:34:42.917 [2024-12-05 12:18:07.454196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.454203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.454522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.454530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.454840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.454847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.455159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.455167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.455380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.455387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 
00:34:42.917 [2024-12-05 12:18:07.455720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.455728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.456088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.456096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.456342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.456350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.456687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.456694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.456998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.457006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 
00:34:42.917 [2024-12-05 12:18:07.457335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.457343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.457769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.457777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.458102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.458109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.458504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.458512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.458829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.458836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 
00:34:42.917 [2024-12-05 12:18:07.459154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.459162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.459485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.459492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.459811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.459819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.460145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.460152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.460506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.460514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 
00:34:42.917 [2024-12-05 12:18:07.460824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.460831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.461159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.461166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.461491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.461499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.461816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.461824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.462152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.462159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 
00:34:42.917 [2024-12-05 12:18:07.462361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.462368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.462646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.462654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.462862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.462869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.463186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.463193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 00:34:42.917 [2024-12-05 12:18:07.463504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.917 [2024-12-05 12:18:07.463512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.917 qpair failed and we were unable to recover it. 
00:34:42.919 [2024-12-05 12:18:07.497396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.497405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.497726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.497733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.497944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.497951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.498375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.498384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.498578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.498587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 
00:34:42.919 [2024-12-05 12:18:07.498917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.498926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.499151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.499159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.499332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.499340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.499656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.499664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.499990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.499998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 
00:34:42.919 [2024-12-05 12:18:07.500341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.500348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.500548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.500555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.500773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.500781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.501055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.501062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.501366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.501375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 
00:34:42.919 [2024-12-05 12:18:07.501789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.501796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.502089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.502097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.502366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.502374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.502705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.502713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.502929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.502937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 
00:34:42.919 [2024-12-05 12:18:07.503277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.503285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.503592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.503600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.503774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.503783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.504115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.504123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.504447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.504459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 
00:34:42.919 [2024-12-05 12:18:07.504897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.504905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.505154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.505162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.505491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.505499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.505822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.505829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.506148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.506156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 
00:34:42.919 [2024-12-05 12:18:07.506469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.506477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.506671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.506679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.507010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.507017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.507411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.507422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.507634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.507643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 
00:34:42.919 [2024-12-05 12:18:07.507980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.507988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.508242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.508250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.508555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.508563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.508859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.508867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.509089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.509096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 
00:34:42.919 [2024-12-05 12:18:07.509327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.509335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.509688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.509697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.510044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.510052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.510220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.510228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.510529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.510537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 
00:34:42.919 [2024-12-05 12:18:07.510842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.510849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.511164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.511172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.511502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.511509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.511921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.511928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.512259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.512267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 
00:34:42.919 [2024-12-05 12:18:07.512578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.512586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.919 [2024-12-05 12:18:07.512920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.919 [2024-12-05 12:18:07.512927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.919 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.513138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.513145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.513469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.513477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.513805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.513812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 
00:34:42.920 [2024-12-05 12:18:07.514118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.514127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.514448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.514459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.514754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.514761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.515071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.515078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.515400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.515407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 
00:34:42.920 [2024-12-05 12:18:07.515772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.515782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.516106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.516114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.516434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.516441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.516763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.516771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.517125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.517134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 
00:34:42.920 [2024-12-05 12:18:07.517452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.517467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.517694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.517701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.518032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.518040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.518360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.518368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.518712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.518720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 
00:34:42.920 [2024-12-05 12:18:07.519033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.519041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.519278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.519285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.519602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.519610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.519899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.519906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.520227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.520235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 
00:34:42.920 [2024-12-05 12:18:07.520525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.520532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.520864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.520872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.521198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.521205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.521595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.521604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.521801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.521810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 
00:34:42.920 [2024-12-05 12:18:07.522142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.522149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.522353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.522360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.522626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.522635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.522968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.522976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.523303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.523311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 
00:34:42.920 [2024-12-05 12:18:07.523665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.523673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.523984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.523993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.524316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.524324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.524637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.524646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.524852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.524868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 
00:34:42.920 [2024-12-05 12:18:07.525193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.525201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.525530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.525537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.525854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.525862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.526204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.526212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.526526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.526534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 
00:34:42.920 [2024-12-05 12:18:07.526863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.526871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.527221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.527230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.527436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.527443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.527764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.527772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.528098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.528105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 
00:34:42.920 [2024-12-05 12:18:07.528429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.528437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.528775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.528787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.529100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.529107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.529431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.529439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.529764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.529772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 
00:34:42.920 [2024-12-05 12:18:07.530096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.530103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.530301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.530308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.530358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.530366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.530677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.530685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.531004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.531011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 
00:34:42.920 [2024-12-05 12:18:07.531341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.531348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.531551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.531558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.531857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.531865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.532185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.532192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.532496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.532504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 
00:34:42.920 [2024-12-05 12:18:07.532815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.532824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.533142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.533150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.533503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.533512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.533797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.533804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 00:34:42.920 [2024-12-05 12:18:07.533986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.920 [2024-12-05 12:18:07.533994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.920 qpair failed and we were unable to recover it. 
00:34:42.920 [2024-12-05 12:18:07.534243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.534251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.534576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.534584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.534904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.534912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.535119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.535126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.535488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.535496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 
00:34:42.921 [2024-12-05 12:18:07.535821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.535828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.535999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.536007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.536385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.536392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.536680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.536688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.537057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.537064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 
00:34:42.921 [2024-12-05 12:18:07.537385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.537393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.537688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.537696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.538046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.538053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.538377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.538385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.538704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.538712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 
00:34:42.921 [2024-12-05 12:18:07.539045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.539053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.539256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.539264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.539612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.539620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.539960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.539968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.540146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.540155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 
00:34:42.921 [2024-12-05 12:18:07.540364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.540372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.540654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.540662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.541087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.541096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.541413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.541421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.541517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.541525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 
00:34:42.921 [2024-12-05 12:18:07.541811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.541820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.542143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.542152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.542474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.542484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.542787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.542796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.543117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.543125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 
00:34:42.921 [2024-12-05 12:18:07.543336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.543344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.543635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.543643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.543857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.543866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.544164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.544172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.544553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.544564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 
00:34:42.921 [2024-12-05 12:18:07.544900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.544907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.545085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.545093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.545468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.545477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.545808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.545818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.546143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.546151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 
00:34:42.921 [2024-12-05 12:18:07.546474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.546482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.546836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.546845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.547168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.547176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.547507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.547516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.547844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.547852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 
00:34:42.921 [2024-12-05 12:18:07.548151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.548159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.548466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.548475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.548770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.548777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.549089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.549096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.549416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.549428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 
00:34:42.921 [2024-12-05 12:18:07.549748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.549757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.550082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.550091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.550448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.550477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.550824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.550833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.551191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.551199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 
00:34:42.921 [2024-12-05 12:18:07.551520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.551529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.551731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.551740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.552073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.552080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.552384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.552393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 00:34:42.921 [2024-12-05 12:18:07.552707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.921 [2024-12-05 12:18:07.552714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.921 qpair failed and we were unable to recover it. 
00:34:42.921 [2024-12-05 12:18:07.553028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.921 [2024-12-05 12:18:07.553036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.921 qpair failed and we were unable to recover it.
00:34:42.921 [2024-12-05 12:18:07.553236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.921 [2024-12-05 12:18:07.553244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.921 qpair failed and we were unable to recover it.
00:34:42.921 [2024-12-05 12:18:07.553580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.921 [2024-12-05 12:18:07.553588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.921 qpair failed and we were unable to recover it.
00:34:42.921 [2024-12-05 12:18:07.553892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.921 [2024-12-05 12:18:07.553900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.921 qpair failed and we were unable to recover it.
00:34:42.921 [2024-12-05 12:18:07.554069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.921 [2024-12-05 12:18:07.554077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.921 qpair failed and we were unable to recover it.
00:34:42.921 [2024-12-05 12:18:07.554461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.921 [2024-12-05 12:18:07.554469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.921 qpair failed and we were unable to recover it.
00:34:42.921 [2024-12-05 12:18:07.554776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.921 [2024-12-05 12:18:07.554784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.921 qpair failed and we were unable to recover it.
00:34:42.921 [2024-12-05 12:18:07.555106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.921 [2024-12-05 12:18:07.555113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.921 qpair failed and we were unable to recover it.
00:34:42.921 [2024-12-05 12:18:07.555481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.921 [2024-12-05 12:18:07.555490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.921 qpair failed and we were unable to recover it.
00:34:42.921 [2024-12-05 12:18:07.555686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.921 [2024-12-05 12:18:07.555694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.921 qpair failed and we were unable to recover it.
00:34:42.921 [2024-12-05 12:18:07.556025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.921 [2024-12-05 12:18:07.556033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.921 qpair failed and we were unable to recover it.
00:34:42.921 [2024-12-05 12:18:07.556246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.921 [2024-12-05 12:18:07.556254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.921 qpair failed and we were unable to recover it.
00:34:42.921 [2024-12-05 12:18:07.556572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.921 [2024-12-05 12:18:07.556579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.921 qpair failed and we were unable to recover it.
00:34:42.921 [2024-12-05 12:18:07.556909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.556916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.557270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.557277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.557573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.557581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.557963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.557971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.558275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.558283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.558627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.558635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.558937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.558946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.559234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.559241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.559576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.559584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.559903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.559910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.560209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.560216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.560533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.560541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.560747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.560755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.561107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.561115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.561464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.561473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.561798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.561805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.562139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.562146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.562475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.562486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.562681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.562689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.562993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.563001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.563275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.563283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.563634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.563642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.563864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.563872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.564237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.564244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.564582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.564590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.564816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.564824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.565123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.565131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.565485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.565492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.565835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.565843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.566178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.566186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.566566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.566576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.566880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.566888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.567209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.567217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.567537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.567545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.567748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.567755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.568031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.568046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.568362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.568370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.568694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.568702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.569037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.569044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.569368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.569385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.569670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.569678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.570102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.570112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.570511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.570520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.570898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.570905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.571085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.571096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.571380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.571391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.571712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.571721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.572035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.572043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.572367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.572375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.572698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.572706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.573028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.573036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.573421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.573429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.573602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.573610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.573968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.573976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.574295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.574304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.574648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.574657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.574988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.574997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.575297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.575305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.575632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.575642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.575968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.575976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.576297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.576304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.576634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.576643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.576966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.576973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.577287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.577296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.577656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.577665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.577970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.577978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.578167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.578176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.578565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.578573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.578907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.578916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.579279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.579288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.579609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.579618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.579959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.579968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.580297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.580305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.922 qpair failed and we were unable to recover it.
00:34:42.922 [2024-12-05 12:18:07.580619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.922 [2024-12-05 12:18:07.580628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.923 qpair failed and we were unable to recover it.
00:34:42.923 [2024-12-05 12:18:07.580735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.923 [2024-12-05 12:18:07.580743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.923 qpair failed and we were unable to recover it.
00:34:42.923 [2024-12-05 12:18:07.581018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.923 [2024-12-05 12:18:07.581026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.923 qpair failed and we were unable to recover it.
00:34:42.923 [2024-12-05 12:18:07.581345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.923 [2024-12-05 12:18:07.581354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.923 qpair failed and we were unable to recover it.
00:34:42.923 [2024-12-05 12:18:07.581681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.923 [2024-12-05 12:18:07.581690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.923 qpair failed and we were unable to recover it.
00:34:42.923 [2024-12-05 12:18:07.582023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.923 [2024-12-05 12:18:07.582032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.923 qpair failed and we were unable to recover it.
00:34:42.923 [2024-12-05 12:18:07.582357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.923 [2024-12-05 12:18:07.582365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.923 qpair failed and we were unable to recover it.
00:34:42.923 [2024-12-05 12:18:07.582681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.923 [2024-12-05 12:18:07.582690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.923 qpair failed and we were unable to recover it.
00:34:42.923 [2024-12-05 12:18:07.583017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.923 [2024-12-05 12:18:07.583026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.923 qpair failed and we were unable to recover it.
00:34:42.923 [2024-12-05 12:18:07.583350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.583358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.583712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.583722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.584039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.584048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.584377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.584389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.584646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.584656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 
00:34:42.923 [2024-12-05 12:18:07.584999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.585008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.585323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.585333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.585712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.585722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.586041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.586049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.586343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.586351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 
00:34:42.923 [2024-12-05 12:18:07.586673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.586681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.587027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.587036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.587238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.587247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.587424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.587432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.587677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.587685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 
00:34:42.923 [2024-12-05 12:18:07.588015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.588025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.588345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.588353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.588639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.588648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.588981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.588990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.589310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.589319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 
00:34:42.923 [2024-12-05 12:18:07.589622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.589632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.589843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.589852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.590072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.590081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.590402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.590412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.590710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.590720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 
00:34:42.923 [2024-12-05 12:18:07.591039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.591048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.591279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.591288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.591581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.591589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.591912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.591920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.592241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.592248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 
00:34:42.923 [2024-12-05 12:18:07.592578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.592588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.592825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.592832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.593039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.593047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.593234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.593243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.593592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.593600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 
00:34:42.923 [2024-12-05 12:18:07.593935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.593943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.594118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.594128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.594471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.594481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.594859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.594868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.595193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.595200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 
00:34:42.923 [2024-12-05 12:18:07.595518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.595526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.595853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.595860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.596169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.596177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.596506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.596514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.596887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.596894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 
00:34:42.923 [2024-12-05 12:18:07.597220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.597228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.597608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.597616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.597919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.597927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.598243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.598251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 00:34:42.923 [2024-12-05 12:18:07.598461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.923 [2024-12-05 12:18:07.598470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.923 qpair failed and we were unable to recover it. 
00:34:42.924 [2024-12-05 12:18:07.598750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.598759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.599078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.599088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.599411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.599419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.599716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.599724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.600001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.600009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 
00:34:42.924 [2024-12-05 12:18:07.600344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.600352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.600637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.600645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.600979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.600987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.601344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.601353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.601690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.601699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 
00:34:42.924 [2024-12-05 12:18:07.602020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.602030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.602349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.602358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.602678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.602687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.602933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.602942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.603253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.603263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 
00:34:42.924 [2024-12-05 12:18:07.603497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.603507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.603800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.603808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.604150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.604160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.604379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.604388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.604596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.604606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 
00:34:42.924 [2024-12-05 12:18:07.604989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.604999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.605290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.605301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.605513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.605523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.605838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.605845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.606167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.606175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 
00:34:42.924 [2024-12-05 12:18:07.606504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.606514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.606849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.606857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.607179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.607187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.607591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.607603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.607955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.607963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 
00:34:42.924 [2024-12-05 12:18:07.608297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.608305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.608631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.608641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.608948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.608959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.609153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.609160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.609498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.609507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 
00:34:42.924 [2024-12-05 12:18:07.609906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.609916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.610222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.610232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.610525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.610533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.610884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.610892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.611259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.611267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 
00:34:42.924 [2024-12-05 12:18:07.611576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.611586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.611922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.611930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.612136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.612144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.612369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.612379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.612707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.612715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 
00:34:42.924 [2024-12-05 12:18:07.613068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.613077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.613397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.613406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.924 qpair failed and we were unable to recover it. 00:34:42.924 [2024-12-05 12:18:07.613632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.924 [2024-12-05 12:18:07.613641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.613977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.613990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.614314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.614323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 
00:34:42.925 [2024-12-05 12:18:07.614641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.614650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.614821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.614829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.615158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.615168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.615490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.615498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.615757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.615767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 
00:34:42.925 [2024-12-05 12:18:07.616104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.616112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.616416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.616425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.616736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.616746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.617133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.617144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.617366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.617375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 
00:34:42.925 [2024-12-05 12:18:07.617704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.617713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.617935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.617944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.618282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.618290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.618625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.618634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.618971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.618981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 
00:34:42.925 [2024-12-05 12:18:07.619301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.619309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.619637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.619647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.619968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.619976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.620364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.620372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.620666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.620675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 
00:34:42.925 [2024-12-05 12:18:07.621001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.621010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.621315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.621325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.621686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.621696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.622099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.622108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.622468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.622476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 
00:34:42.925 [2024-12-05 12:18:07.622878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.622888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.623234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.623243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.623529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.623538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.623873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.623881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.624203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.624213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 
00:34:42.925 [2024-12-05 12:18:07.624605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.624616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.624932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.624942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.625262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.625269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.625587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.625596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.625803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.625811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 
00:34:42.925 [2024-12-05 12:18:07.626161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.626170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.626518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.626529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.626846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.626856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.627172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.627182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.627547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.627558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 
00:34:42.925 [2024-12-05 12:18:07.627917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.627925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.628148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.628155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.628592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.628603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.628794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.628802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 00:34:42.925 [2024-12-05 12:18:07.629117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.629125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.925 qpair failed and we were unable to recover it. 
00:34:42.925 [2024-12-05 12:18:07.629468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.925 [2024-12-05 12:18:07.629479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.629681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.629688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.629920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.629930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.630273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.630284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.630608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.630618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 
00:34:42.926 [2024-12-05 12:18:07.630960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.630971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.631189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.631199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.631610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.631621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.631940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.631992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.632311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.632321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 
00:34:42.926 [2024-12-05 12:18:07.632630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.632639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.632968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.632978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.633184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.633193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.633541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.633550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.633903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.633911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 
00:34:42.926 [2024-12-05 12:18:07.634237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.634246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.634565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.634574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.634744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.634754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.635078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.635086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.635415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.635423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 
00:34:42.926 [2024-12-05 12:18:07.635745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.635754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.636102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.636110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.636431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.636439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.636684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.636695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.637022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.637030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 
00:34:42.926 [2024-12-05 12:18:07.637340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.637350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.637632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.637643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.637971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.637979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.638294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.638302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.638677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.638687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 
00:34:42.926 [2024-12-05 12:18:07.639019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.639028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.639325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.639335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.639675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.639684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.640030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.640041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 00:34:42.926 [2024-12-05 12:18:07.640386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.926 [2024-12-05 12:18:07.640395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.926 qpair failed and we were unable to recover it. 
00:34:42.926 [2024-12-05 12:18:07.640625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.926 [2024-12-05 12:18:07.640635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.926 qpair failed and we were unable to recover it.
[... the same three-line cycle — connect() failed with errno = 111 (ECONNREFUSED), sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it — repeats continuously from 12:18:07.640 through 12:18:07.677; repeated occurrences elided ...]
00:34:42.929 [2024-12-05 12:18:07.677342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.677350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.677635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.677644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.677964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.677974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.678364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.678374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.678683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.678692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 
00:34:42.929 [2024-12-05 12:18:07.679022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.679030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.679233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.679240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.679443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.679451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.679675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.679683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.680105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.680115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 
00:34:42.929 [2024-12-05 12:18:07.680281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.680292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.680625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.680633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.680913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.680921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.681205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.681213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.681588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.681597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 
00:34:42.929 [2024-12-05 12:18:07.681931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.681938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.682258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.682266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.682621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.682631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.682826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.682835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.683214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.683221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 
00:34:42.929 [2024-12-05 12:18:07.683570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.683578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.683946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.683958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.684252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.684261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.684653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.684661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.684992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.685000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 
00:34:42.929 [2024-12-05 12:18:07.685398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.929 [2024-12-05 12:18:07.685406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.929 qpair failed and we were unable to recover it. 00:34:42.929 [2024-12-05 12:18:07.685705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.685713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.686040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.686047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.686333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.686340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.686669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.686677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 
00:34:42.930 [2024-12-05 12:18:07.686999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.687006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.687320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.687328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.687635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.687643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.687817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.687826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.688210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.688219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 
00:34:42.930 [2024-12-05 12:18:07.688532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.688541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.688868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.688875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.689186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.689195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.689516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.689526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.689861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.689871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 
00:34:42.930 [2024-12-05 12:18:07.690195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.690202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.690512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.690520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.690856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.690864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.691168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.691178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.691497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.691506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 
00:34:42.930 [2024-12-05 12:18:07.691718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.691726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.692130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.692138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.692463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.692471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.692791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.692803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.693126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.693134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 
00:34:42.930 [2024-12-05 12:18:07.693463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.693471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.693789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.693797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.694126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.694138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.694475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.694484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.694676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.694685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 
00:34:42.930 [2024-12-05 12:18:07.695006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.695014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.695398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.695405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.695732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.695740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.696056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.696065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.696267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.696276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 
00:34:42.930 [2024-12-05 12:18:07.696629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.696638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.696960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.696968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.697289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.697298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.697645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.697653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.697957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.697965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 
00:34:42.930 [2024-12-05 12:18:07.698289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.698298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.698612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.698621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.698954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.698962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.699278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.699286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 00:34:42.930 [2024-12-05 12:18:07.699491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.930 [2024-12-05 12:18:07.699500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.930 qpair failed and we were unable to recover it. 
00:34:42.930 [2024-12-05 12:18:07.699714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.931 [2024-12-05 12:18:07.699721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.931 qpair failed and we were unable to recover it. 00:34:42.931 [2024-12-05 12:18:07.700029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.931 [2024-12-05 12:18:07.700036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.931 qpair failed and we were unable to recover it. 00:34:42.931 [2024-12-05 12:18:07.700268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.931 [2024-12-05 12:18:07.700275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.931 qpair failed and we were unable to recover it. 00:34:42.931 [2024-12-05 12:18:07.700670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.931 [2024-12-05 12:18:07.700679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.931 qpair failed and we were unable to recover it. 00:34:42.931 [2024-12-05 12:18:07.700997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.931 [2024-12-05 12:18:07.701005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.931 qpair failed and we were unable to recover it. 
00:34:42.931 [2024-12-05 12:18:07.701277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.701286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.701624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.701632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.701947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.701955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.702236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.702244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.702586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.702594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.702895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.702903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.703198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.703206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.703553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.703561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.703889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.703897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.704097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.704106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.704425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.704433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.704729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.704738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.705055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.705063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.705261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.705270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.705551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.705562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.705843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.705850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.706003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.706012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.706232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.706241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.706577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.706587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.706921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.706930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.707175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.707183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.707519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.707527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.707713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.707721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.708051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.708059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.708278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.708286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.708581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.708589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.708773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.708781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.709158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.709167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.709475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.709484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.709569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.709577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.709905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.709912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.710191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.710199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.710534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.710542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.710893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.710903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.711076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.711084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.711312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.711319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.711671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.711678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.711970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.711977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.712291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.712298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.712466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.712473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.712758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.712766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.713116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.713128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.713446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.931 [2024-12-05 12:18:07.713464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.931 qpair failed and we were unable to recover it.
00:34:42.931 [2024-12-05 12:18:07.713760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.713767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.714045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.714053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.714373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.714381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.714718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.714726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.715021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.715030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.715351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.715360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.715668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.715678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.715995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.716002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.716340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.716348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.716642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.716650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.716980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.716990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.717177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.717185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.717402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.717409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.717702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.717710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.718028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.718037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.718234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.718243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.718540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.718548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.718877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.718885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.719200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.719208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.719521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.719529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.719633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.719642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.719991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.719999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.720312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.720320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.720633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.720640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.720971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.720979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.721265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.721273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.721581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.721590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.721934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.721942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.722105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.722112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.722470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.722477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.722676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.722684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.722995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.723004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.723200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.723210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.723529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.723536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.723748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.723755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.724058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.724066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.724416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.724423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.724713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.724721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.724789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.724796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.725098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.725107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.932 [2024-12-05 12:18:07.725425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.932 [2024-12-05 12:18:07.725433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.932 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.725758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.725766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.726071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.726079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.726290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.726299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.726583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.726591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.726841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.726849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.727202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.727210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.727475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.727484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.727908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.727915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.728229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.728236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.728580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.728588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.728917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.728925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.729244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.729258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.729581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.729588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.729914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.729923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.730245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.730252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.730582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.730590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.730908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.730915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.731117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.731125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.731431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.731438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.731652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.731660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.732000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.732009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.732332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.732339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.732624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.732632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.732941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.732949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.733178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.733185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.733518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.733528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.733744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.733751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.734087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.734096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.734421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.734428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.734760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.734768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.735129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.933 [2024-12-05 12:18:07.735136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.933 qpair failed and we were unable to recover it.
00:34:42.933 [2024-12-05 12:18:07.735359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.735367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.933 [2024-12-05 12:18:07.735665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.735672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.933 [2024-12-05 12:18:07.735987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.735995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.933 [2024-12-05 12:18:07.736340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.736349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.933 [2024-12-05 12:18:07.736755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.736762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 
00:34:42.933 [2024-12-05 12:18:07.737118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.737126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.933 [2024-12-05 12:18:07.737205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.737213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.933 [2024-12-05 12:18:07.737527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.737535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.933 [2024-12-05 12:18:07.737886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.737894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.933 [2024-12-05 12:18:07.738182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.738189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 
00:34:42.933 [2024-12-05 12:18:07.738515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.738523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.933 [2024-12-05 12:18:07.738864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.738872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.933 [2024-12-05 12:18:07.739202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.739209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.933 [2024-12-05 12:18:07.739459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.739467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.933 [2024-12-05 12:18:07.739793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.739801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 
00:34:42.933 [2024-12-05 12:18:07.740078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.740086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.933 [2024-12-05 12:18:07.740403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.740411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.933 [2024-12-05 12:18:07.740723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.933 [2024-12-05 12:18:07.740731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.933 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.741009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.741017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.741348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.741355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 
00:34:42.934 [2024-12-05 12:18:07.741729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.741738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.742049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.742056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.742291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.742298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.742627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.742636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.742971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.742979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 
00:34:42.934 [2024-12-05 12:18:07.743338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.743345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.743670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.743677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.743880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.743887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.744205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.744212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.744407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.744414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 
00:34:42.934 [2024-12-05 12:18:07.744703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.744711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.745033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.745040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.745238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.745246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.745549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.745557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.745898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.745905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 
00:34:42.934 [2024-12-05 12:18:07.746090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.746101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.746430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.746439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.746762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.746770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.747071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.747078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.747411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.747420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 
00:34:42.934 [2024-12-05 12:18:07.747655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.747663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.748019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.748027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.748349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.748358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.748560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.748570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.748911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.748920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 
00:34:42.934 [2024-12-05 12:18:07.749210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.749221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.749421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.749431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.749623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.749632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.749943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.749956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.750176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.750186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 
00:34:42.934 [2024-12-05 12:18:07.750505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.750514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.750799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.750807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.751098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.751106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.751467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.751476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.751788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.751795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 
00:34:42.934 [2024-12-05 12:18:07.752078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.752086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.752301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.752310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.752594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.752603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.934 [2024-12-05 12:18:07.752879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.934 [2024-12-05 12:18:07.752888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.934 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.753209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.753218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 
00:34:42.935 [2024-12-05 12:18:07.753557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.753565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.753790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.753799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.754116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.754123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.754447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.754475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.754816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.754823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 
00:34:42.935 [2024-12-05 12:18:07.755124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.755132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.755447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.755459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.755656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.755663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.755942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.755949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.756144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.756153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 
00:34:42.935 [2024-12-05 12:18:07.756348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.756358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.756652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.756661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.756916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.756925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.757267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.757275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.757460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.757468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 
00:34:42.935 [2024-12-05 12:18:07.757811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.757819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.758153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.758161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.758475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.758485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.758824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.758831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 00:34:42.935 [2024-12-05 12:18:07.759036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.935 [2024-12-05 12:18:07.759044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.935 qpair failed and we were unable to recover it. 
00:34:42.938 [2024-12-05 12:18:07.793734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.793743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.794059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.794068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.794403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.794411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.794751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.794758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.794950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.794958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 
00:34:42.938 [2024-12-05 12:18:07.795319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.795327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.795676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.795684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.796012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.796020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.796340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.796349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.796589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.796598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 
00:34:42.938 [2024-12-05 12:18:07.796942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.796951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.797294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.797303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.797620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.797628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.797954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.797962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.798309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.798316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 
00:34:42.938 [2024-12-05 12:18:07.798632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.798640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.798970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.798977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.799141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.799149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.799495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.799504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.799849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.799856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 
00:34:42.938 [2024-12-05 12:18:07.800065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.800074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.800413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.800420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.800741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.800752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.801052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.801059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.801383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.801391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 
00:34:42.938 [2024-12-05 12:18:07.801707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.801715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.802000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.802008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.802350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.802357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.802563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.802571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.802939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.802947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 
00:34:42.938 [2024-12-05 12:18:07.803264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.938 [2024-12-05 12:18:07.803272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.938 qpair failed and we were unable to recover it. 00:34:42.938 [2024-12-05 12:18:07.803597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.803605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.803796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.803803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.804085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.804093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.804405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.804412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 
00:34:42.939 [2024-12-05 12:18:07.804722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.804730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.805067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.805074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.805280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.805287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.805599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.805606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.805941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.805949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 
00:34:42.939 [2024-12-05 12:18:07.806145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.806154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.806472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.806481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.806782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.806789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.807093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.807101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.807431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.807439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 
00:34:42.939 [2024-12-05 12:18:07.807760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.807776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.808111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.808119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.808428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.808435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.808773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.808780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.808983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.808993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 
00:34:42.939 [2024-12-05 12:18:07.809267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.809275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.809567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.809575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.809898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.809906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.810224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.810231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.810313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.810320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 
00:34:42.939 [2024-12-05 12:18:07.810537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.810545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.810790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.810805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.811123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.811131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.811464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.811472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.811666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.811675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 
00:34:42.939 [2024-12-05 12:18:07.811994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.812001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.812328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.812336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.812635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.812642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.812983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.812991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.813336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.813343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 
00:34:42.939 [2024-12-05 12:18:07.813643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.813651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.813973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.813980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.814292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.814300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.814610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.814618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.814954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.814961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 
00:34:42.939 [2024-12-05 12:18:07.815305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.815312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.815600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.815608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.815977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.815984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.816323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.816331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 00:34:42.939 [2024-12-05 12:18:07.816637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.939 [2024-12-05 12:18:07.816646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.939 qpair failed and we were unable to recover it. 
00:34:42.939 [2024-12-05 12:18:07.816949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.939 [2024-12-05 12:18:07.816958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.939 qpair failed and we were unable to recover it.
(the identical three-line error above repeats for every reconnection attempt of tqpair=0x8d40c0 from [2024-12-05 12:18:07.816949] through [2024-12-05 12:18:07.853088])
00:34:42.941 [2024-12-05 12:18:07.853423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.941 [2024-12-05 12:18:07.853430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.941 qpair failed and we were unable to recover it. 00:34:42.941 [2024-12-05 12:18:07.853747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.941 [2024-12-05 12:18:07.853756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.941 qpair failed and we were unable to recover it. 00:34:42.941 [2024-12-05 12:18:07.854072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.941 [2024-12-05 12:18:07.854079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.941 qpair failed and we were unable to recover it. 00:34:42.941 [2024-12-05 12:18:07.854291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.941 [2024-12-05 12:18:07.854300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.941 qpair failed and we were unable to recover it. 00:34:42.941 [2024-12-05 12:18:07.854616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.941 [2024-12-05 12:18:07.854625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.941 qpair failed and we were unable to recover it. 
00:34:42.941 [2024-12-05 12:18:07.854938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.941 [2024-12-05 12:18:07.854946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.941 qpair failed and we were unable to recover it. 00:34:42.941 [2024-12-05 12:18:07.855259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.941 [2024-12-05 12:18:07.855269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.941 qpair failed and we were unable to recover it. 00:34:42.941 [2024-12-05 12:18:07.855600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.941 [2024-12-05 12:18:07.855608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.941 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.855924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.855932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.856140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.856150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 
00:34:42.942 [2024-12-05 12:18:07.856451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.856480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.856694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.856702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.857021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.857030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.857358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.857367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.857693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.857701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 
00:34:42.942 [2024-12-05 12:18:07.857931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.857938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.858249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.858258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.858578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.858586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.858911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.858920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.859238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.859247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 
00:34:42.942 [2024-12-05 12:18:07.859588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.859597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.859921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.859931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.860250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.860260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.860580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.860588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.860810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.860819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 
00:34:42.942 [2024-12-05 12:18:07.861156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.861164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.861472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.861482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.861678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.861688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.862010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.862018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.862324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.862333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 
00:34:42.942 [2024-12-05 12:18:07.862623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.862631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.862922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.862930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.863272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.863281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.863604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.863615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.863834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.863843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 
00:34:42.942 [2024-12-05 12:18:07.864059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.864068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.864396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.864408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.864731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.864740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.865105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.865115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.865451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.865466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 
00:34:42.942 [2024-12-05 12:18:07.865823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.865834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.866159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.866167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.866412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.866421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.866768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.866778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.867102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.867111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 
00:34:42.942 [2024-12-05 12:18:07.867325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.867335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.867631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.867642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.867963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.867971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.868247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.868255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.868563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.868573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 
00:34:42.942 [2024-12-05 12:18:07.868880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.868888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.869067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.869078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.869393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.869401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.942 [2024-12-05 12:18:07.869699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.942 [2024-12-05 12:18:07.869707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.942 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.870025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.870033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 
00:34:42.943 [2024-12-05 12:18:07.870366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.870376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.870696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.870706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.871010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.871018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.871336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.871343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.871751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.871760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 
00:34:42.943 [2024-12-05 12:18:07.872074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.872083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.872401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.872411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.872606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.872614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.872804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.872812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.873122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.873130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 
00:34:42.943 [2024-12-05 12:18:07.873408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.873418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.873773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.873782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.874075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.874084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.874313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.874322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.874631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.874642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 
00:34:42.943 [2024-12-05 12:18:07.874961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.874971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.875292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.875303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.875647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.875656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.876016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.876024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 00:34:42.943 [2024-12-05 12:18:07.876328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.943 [2024-12-05 12:18:07.876337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.943 qpair failed and we were unable to recover it. 
00:34:42.943 [2024-12-05 12:18:07.876574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:42.943 [2024-12-05 12:18:07.876582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:42.943 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats ~114 more times over timestamps 12:18:07.876862 through 12:18:07.911420, log clock 00:34:42.943-00:34:42.946 ...]
00:34:42.946 [2024-12-05 12:18:07.911734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.946 [2024-12-05 12:18:07.911742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.946 qpair failed and we were unable to recover it. 00:34:42.946 [2024-12-05 12:18:07.911948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.946 [2024-12-05 12:18:07.911956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.946 qpair failed and we were unable to recover it. 00:34:42.946 [2024-12-05 12:18:07.912156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.946 [2024-12-05 12:18:07.912165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.946 qpair failed and we were unable to recover it. 00:34:42.946 [2024-12-05 12:18:07.912495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.946 [2024-12-05 12:18:07.912503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.946 qpair failed and we were unable to recover it. 00:34:42.946 [2024-12-05 12:18:07.912792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.946 [2024-12-05 12:18:07.912800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.946 qpair failed and we were unable to recover it. 
00:34:42.946 [2024-12-05 12:18:07.913089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.946 [2024-12-05 12:18:07.913097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.946 qpair failed and we were unable to recover it. 00:34:42.946 [2024-12-05 12:18:07.913421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.946 [2024-12-05 12:18:07.913428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.946 qpair failed and we were unable to recover it. 00:34:42.946 [2024-12-05 12:18:07.913818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.946 [2024-12-05 12:18:07.913826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.946 qpair failed and we were unable to recover it. 00:34:42.946 [2024-12-05 12:18:07.914159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.946 [2024-12-05 12:18:07.914167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.946 qpair failed and we were unable to recover it. 00:34:42.946 [2024-12-05 12:18:07.914386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.946 [2024-12-05 12:18:07.914393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.946 qpair failed and we were unable to recover it. 
00:34:42.946 [2024-12-05 12:18:07.914730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.946 [2024-12-05 12:18:07.914738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.946 qpair failed and we were unable to recover it. 00:34:42.946 [2024-12-05 12:18:07.914940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.946 [2024-12-05 12:18:07.914947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.946 qpair failed and we were unable to recover it. 00:34:42.946 [2024-12-05 12:18:07.915342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.915350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.915533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.915540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.915900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.915907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 
00:34:42.947 [2024-12-05 12:18:07.916240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.916248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.916471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.916479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.916753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.916761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.917032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.917040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.917315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.917322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 
00:34:42.947 [2024-12-05 12:18:07.917661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.917669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.917750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.917758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.918040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.918048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.918356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.918366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.918692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.918700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 
00:34:42.947 [2024-12-05 12:18:07.918985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.918993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.919290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.919299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.919373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.919380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.919513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.919521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.919802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.919809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 
00:34:42.947 [2024-12-05 12:18:07.920033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.920040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.920286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.920293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.920574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.920582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.920912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.920920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.921260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.921268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 
00:34:42.947 [2024-12-05 12:18:07.921590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.921599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.921924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.921933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.922158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.922167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.922494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.922501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.922798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.922806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 
00:34:42.947 [2024-12-05 12:18:07.923168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.923176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.923487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.923496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.923815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.923822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.924130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.924137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.924466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.924474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 
00:34:42.947 [2024-12-05 12:18:07.924873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.924882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.925190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.925198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.925504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.925512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.925814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.925822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.926024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.926032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 
00:34:42.947 [2024-12-05 12:18:07.926400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.947 [2024-12-05 12:18:07.926410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.947 qpair failed and we were unable to recover it. 00:34:42.947 [2024-12-05 12:18:07.926611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.926619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:42.948 [2024-12-05 12:18:07.926806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.926813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:42.948 [2024-12-05 12:18:07.927140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.927148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:42.948 [2024-12-05 12:18:07.927473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.927482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 
00:34:42.948 [2024-12-05 12:18:07.927826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.927834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:42.948 [2024-12-05 12:18:07.928158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.928166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:42.948 [2024-12-05 12:18:07.928486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.928494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:42.948 [2024-12-05 12:18:07.928815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.928823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:42.948 [2024-12-05 12:18:07.929039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.929046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 
00:34:42.948 [2024-12-05 12:18:07.929354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.929363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:42.948 [2024-12-05 12:18:07.929689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.929696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:42.948 [2024-12-05 12:18:07.930023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.930031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:42.948 [2024-12-05 12:18:07.930383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.930390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:42.948 [2024-12-05 12:18:07.930705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.930714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 
00:34:42.948 [2024-12-05 12:18:07.931061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.931068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:42.948 [2024-12-05 12:18:07.931280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.931287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:42.948 [2024-12-05 12:18:07.931503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.931511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:42.948 [2024-12-05 12:18:07.931881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:42.948 [2024-12-05 12:18:07.931889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:42.948 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.932204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.932214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 
00:34:43.226 [2024-12-05 12:18:07.932536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.932546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.932863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.932874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.933175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.933182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.933513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.933522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.933783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.933790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 
00:34:43.226 [2024-12-05 12:18:07.933997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.934005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.934332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.934339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.934625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.934633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.934971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.934978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.935282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.935289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 
00:34:43.226 [2024-12-05 12:18:07.935365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.935373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.935677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.935685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.935965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.935973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.936171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.936178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.936479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.936488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 
00:34:43.226 [2024-12-05 12:18:07.936815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.936822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.937146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.937155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.937480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.937488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.937804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.937812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.938126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.938134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 
00:34:43.226 [2024-12-05 12:18:07.938465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.938473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.938765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.938774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.938965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.938972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.939240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.939247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 00:34:43.226 [2024-12-05 12:18:07.939525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.226 [2024-12-05 12:18:07.939532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.226 qpair failed and we were unable to recover it. 
00:34:43.227 [2024-12-05 12:18:07.939850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.939858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.940171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.940178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.940502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.940510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.940825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.940832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.941159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.941167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 
00:34:43.227 [2024-12-05 12:18:07.941512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.941520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.941843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.941852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.942172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.942181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.942500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.942509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.942832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.942839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 
00:34:43.227 [2024-12-05 12:18:07.943048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.943056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.943395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.943403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.943732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.943740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.944086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.944093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.944403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.944411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 
00:34:43.227 [2024-12-05 12:18:07.944649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.944657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.944966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.944973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.945308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.945315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.945625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.945633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.945979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.945986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 
00:34:43.227 [2024-12-05 12:18:07.946269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.946276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.946611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.946618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.946938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.946945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.947276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.947285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.947600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.947609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 
00:34:43.227 [2024-12-05 12:18:07.947918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.947927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.948118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.948127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.948470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.948479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.948774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.948781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.949092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.949100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 
00:34:43.227 [2024-12-05 12:18:07.949411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.949419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.949616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.949624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.949979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.949986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.950332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.950341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.950711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.950719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 
00:34:43.227 [2024-12-05 12:18:07.951031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.951039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.951228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.227 [2024-12-05 12:18:07.951236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.227 qpair failed and we were unable to recover it. 00:34:43.227 [2024-12-05 12:18:07.951550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.951558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.951894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.951901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.952219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.952227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 
00:34:43.228 [2024-12-05 12:18:07.952545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.952553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.952887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.952894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.953216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.953223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.953534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.953543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.953878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.953886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 
00:34:43.228 [2024-12-05 12:18:07.954226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.954235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.954548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.954556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.954752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.954760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.955082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.955090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.955394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.955402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 
00:34:43.228 [2024-12-05 12:18:07.955606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.955614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.955810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.955818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.956122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.956129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.956439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.956447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.956779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.956788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 
00:34:43.228 [2024-12-05 12:18:07.957094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.957102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.957426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.957435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.957764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.957771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.958129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.958136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.958488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.958497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 
00:34:43.228 [2024-12-05 12:18:07.958826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.958834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.959057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.959065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.959389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.959396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.959705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.959713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.960038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.960048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 
00:34:43.228 [2024-12-05 12:18:07.960375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.960384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.960707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.960714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.961032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.961040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.961367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.961375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.961674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.961682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 
00:34:43.228 [2024-12-05 12:18:07.962000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.962008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.962286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.962294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.962632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.962639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.962938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.962946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 00:34:43.228 [2024-12-05 12:18:07.963273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.228 [2024-12-05 12:18:07.963280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.228 qpair failed and we were unable to recover it. 
00:34:43.228 [2024-12-05 12:18:07.963599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.229 [2024-12-05 12:18:07.963607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.229 qpair failed and we were unable to recover it. 00:34:43.229 [2024-12-05 12:18:07.963813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.229 [2024-12-05 12:18:07.963822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.229 qpair failed and we were unable to recover it. 00:34:43.229 [2024-12-05 12:18:07.964002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.229 [2024-12-05 12:18:07.964009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.229 qpair failed and we were unable to recover it. 00:34:43.229 [2024-12-05 12:18:07.964341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.229 [2024-12-05 12:18:07.964349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.229 qpair failed and we were unable to recover it. 00:34:43.229 [2024-12-05 12:18:07.964683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.229 [2024-12-05 12:18:07.964691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.229 qpair failed and we were unable to recover it. 
00:34:43.232 [the connect() errno 111 / qpair-failure message pair above repeats verbatim for each subsequent reconnect attempt through 12:18:07.997890]
00:34:43.232 [2024-12-05 12:18:07.998086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:07.998093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:07.998385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:07.998393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:07.998673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:07.998681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:07.999024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:07.999032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:07.999353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:07.999360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 
00:34:43.232 [2024-12-05 12:18:07.999529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:07.999538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:07.999828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:07.999835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.000156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.000164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.000481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.000489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.000810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.000818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 
00:34:43.232 [2024-12-05 12:18:08.001154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.001163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.001478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.001486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.001817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.001824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.002024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.002031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.002323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.002330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 
00:34:43.232 [2024-12-05 12:18:08.002621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.002629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.002939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.002947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.003256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.003265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.003469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.003478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.003562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.003571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 
00:34:43.232 [2024-12-05 12:18:08.003855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.003864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.004232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.004241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.004561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.004568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.004794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.004801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.005077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.005085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 
00:34:43.232 [2024-12-05 12:18:08.005378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.005386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.005626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.005635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.005925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.005933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.006228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.006236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.006578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.006586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 
00:34:43.232 [2024-12-05 12:18:08.006837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.006845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.007110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.007117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.007405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.007413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.232 [2024-12-05 12:18:08.007614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.232 [2024-12-05 12:18:08.007622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.232 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.007953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.007960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 
00:34:43.233 [2024-12-05 12:18:08.008240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.008247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.008605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.008613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.008964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.008971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.009278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.009287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.009605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.009613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 
00:34:43.233 [2024-12-05 12:18:08.009955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.009963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.010284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.010292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.010506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.010514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.010837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.010844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.011071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.011079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 
00:34:43.233 [2024-12-05 12:18:08.011406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.011413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.011682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.011690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.011886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.011894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.012231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.012238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.012558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.012566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 
00:34:43.233 [2024-12-05 12:18:08.012895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.012903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.013214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.013241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.013581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.013589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.013941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.013948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.014273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.014280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 
00:34:43.233 [2024-12-05 12:18:08.014595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.014604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.014792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.014801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.015115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.015122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.015434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.015442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.015775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.015783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 
00:34:43.233 [2024-12-05 12:18:08.015998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.016005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.016361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.016368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.016684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.016692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.016997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.017005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.017323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.017331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 
00:34:43.233 [2024-12-05 12:18:08.017635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.017643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.017815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.017823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.018011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.018019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.018341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.018348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.018676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.018685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 
00:34:43.233 [2024-12-05 12:18:08.019010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.233 [2024-12-05 12:18:08.019018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.233 qpair failed and we were unable to recover it. 00:34:43.233 [2024-12-05 12:18:08.019220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.019227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.019511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.019520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.019825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.019832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.020126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.020133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 
00:34:43.234 [2024-12-05 12:18:08.020493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.020502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.020813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.020821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.021143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.021152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.021492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.021501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.021730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.021738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 
00:34:43.234 [2024-12-05 12:18:08.022052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.022060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.022372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.022380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.022700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.022708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.022994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.023001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.023356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.023363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 
00:34:43.234 [2024-12-05 12:18:08.023682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.023690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.024014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.024022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.024351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.024359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.024716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.024724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.024927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.024935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 
00:34:43.234 [2024-12-05 12:18:08.025207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.025216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.025528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.025535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.025858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.025866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.026175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.026182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.026378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.026386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 
00:34:43.234 [2024-12-05 12:18:08.026779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.026787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.027106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.027114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.027430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.027437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.027758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.027766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.028094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.028103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 
00:34:43.234 [2024-12-05 12:18:08.028321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.028330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.028596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.028605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.028931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.028940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.029125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.029132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.029430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.029438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 
00:34:43.234 [2024-12-05 12:18:08.029628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.029636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.029878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.029885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.030220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.030228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.030553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.030561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 00:34:43.234 [2024-12-05 12:18:08.030876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.234 [2024-12-05 12:18:08.030884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.234 qpair failed and we were unable to recover it. 
00:34:43.235 [2024-12-05 12:18:08.031198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.031206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.031513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.031521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.031857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.031865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.032181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.032189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.032341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.032349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 
00:34:43.235 [2024-12-05 12:18:08.032549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.032557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.032855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.032863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.033178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.033185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.033383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.033391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.033654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.033664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 
00:34:43.235 [2024-12-05 12:18:08.034005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.034012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.034318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.034326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.034633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.034640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.034825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.034833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.035219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.035226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 
00:34:43.235 [2024-12-05 12:18:08.035528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.035536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.035706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.035714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.036034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.036041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.036260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.036267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.036446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.036458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 
00:34:43.235 [2024-12-05 12:18:08.036792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.036799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.037124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.037132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.037460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.037470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.037873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.037880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.038192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.038199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 
00:34:43.235 [2024-12-05 12:18:08.038527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.038536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.038870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.038878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.039183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.039192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.039498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.039505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.039828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.039836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 
00:34:43.235 [2024-12-05 12:18:08.040149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.040156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.040463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.040471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.040820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.235 [2024-12-05 12:18:08.040827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.235 qpair failed and we were unable to recover it. 00:34:43.235 [2024-12-05 12:18:08.041027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.041035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.041299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.041307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 
00:34:43.236 [2024-12-05 12:18:08.041632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.041640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.041849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.041858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.042219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.042226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.042548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.042556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.042859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.042867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 
00:34:43.236 [2024-12-05 12:18:08.043170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.043178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.043494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.043502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.043845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.043854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.044161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.044169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.044494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.044503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 
00:34:43.236 [2024-12-05 12:18:08.044838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.044846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.045160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.045168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.045387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.045394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.045714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.045722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.046077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.046086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 
00:34:43.236 [2024-12-05 12:18:08.046424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.046433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.046661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.046670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.046996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.047005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.047319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.047328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.047635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.047642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 
00:34:43.236 [2024-12-05 12:18:08.047918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.047926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.048246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.048253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.048449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.048460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.048742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.048750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.049060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.049067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 
00:34:43.236 [2024-12-05 12:18:08.049392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.049399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.049694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.049702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.050030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.050037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.050355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.050363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.050521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.050528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 
00:34:43.236 [2024-12-05 12:18:08.050836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.050845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.051166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.051174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.051517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.051526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.051845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.051852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 00:34:43.236 [2024-12-05 12:18:08.052183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.236 [2024-12-05 12:18:08.052190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.236 qpair failed and we were unable to recover it. 
00:34:43.236 [2024-12-05 12:18:08.052507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.236 [2024-12-05 12:18:08.052515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.236 qpair failed and we were unable to recover it.
00:34:43.236 [2024-12-05 12:18:08.052697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.052704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.053044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.053052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.053272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.053279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.053587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.053595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.053937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.053945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.054225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.054232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.054560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.054570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.054871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.054879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.055205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.055212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.055519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.055527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.055745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.055754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.056081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.056088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.056450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.056472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.056786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.056794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.057119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.057126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.057449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.057460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.057764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.057771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.057979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.057986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.058264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.058272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.058503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.058511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.058851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.058858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.059172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.059179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.059476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.059485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.059841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.059848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.060174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.060182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.060505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.060513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.060830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.060838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.061168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.061175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.061494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.061502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.061797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.061804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.062116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.062123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.062407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.062415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.062575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.062583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.062921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.062929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.063231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.063239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.063462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.063471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.063664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.063672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.063995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.237 [2024-12-05 12:18:08.064002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.237 qpair failed and we were unable to recover it.
00:34:43.237 [2024-12-05 12:18:08.064313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.064321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.064562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.064569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.064870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.064877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.065205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.065213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.065404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.065412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.065700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.065708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.065929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.065937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.066228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.066237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.066525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.066532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.066622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.066629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.066979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.066987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.067300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.067307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.067642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.067651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.067968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.067975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.068294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.068302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.068373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.068380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.068672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.068680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.069016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.069024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.069339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.069348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.069634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.069643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.069966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.069975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.070299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.070308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.070629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.070638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.070926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.070934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.071256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.071263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.071591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.071599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.071911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.071920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.072128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.072137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.072493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.072501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.072688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.072696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.073020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.073030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.073356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.073365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.073661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.073669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.073993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.074001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.074202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.074210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.074531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.074540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.074884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.074894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.075206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.238 [2024-12-05 12:18:08.075214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.238 qpair failed and we were unable to recover it.
00:34:43.238 [2024-12-05 12:18:08.075479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.075489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.075796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.075804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.076017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.076024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.076304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.076314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.076637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.076647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.076967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.076975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.077167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.077175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.077444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.077452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.077766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.077775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.078035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.078043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.078376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.078384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.078699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.078707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.078980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.078988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.079319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.079327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.079635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.079644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.079972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.079980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.080171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.080180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.080487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.239 [2024-12-05 12:18:08.080497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.239 qpair failed and we were unable to recover it.
00:34:43.239 [2024-12-05 12:18:08.080827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.080836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.081148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.081157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.081358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.081367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.081684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.081693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.081997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.082006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 
00:34:43.239 [2024-12-05 12:18:08.082200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.082208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.082532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.082540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.082772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.082780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.083094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.083101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.083324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.083332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 
00:34:43.239 [2024-12-05 12:18:08.083699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.083707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.083869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.083876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.084182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.084189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.084505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.084513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.084680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.084688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 
00:34:43.239 [2024-12-05 12:18:08.085046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.085054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.085408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.085416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.085747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.085755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.085928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.085937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 00:34:43.239 [2024-12-05 12:18:08.086290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.239 [2024-12-05 12:18:08.086298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.239 qpair failed and we were unable to recover it. 
00:34:43.239 [2024-12-05 12:18:08.086648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.086657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.086938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.086946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.087271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.087280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.087467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.087475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.087801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.087809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 
00:34:43.240 [2024-12-05 12:18:08.088008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.088016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.088214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.088223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.088554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.088562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.088887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.088895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.089225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.089233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 
00:34:43.240 [2024-12-05 12:18:08.089565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.089575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.089822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.089830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.090151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.090160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.090440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.090447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.090764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.090773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 
00:34:43.240 [2024-12-05 12:18:08.090962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.090972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.091313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.091321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.091623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.091632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.091942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.091951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.092261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.092269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 
00:34:43.240 [2024-12-05 12:18:08.092480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.092488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.092767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.092775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.092958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.092966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.093292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.093299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.093422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.093429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 
00:34:43.240 [2024-12-05 12:18:08.093721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.093731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.094056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.094064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.094374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.094384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.094602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.094614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.094949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.094957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 
00:34:43.240 [2024-12-05 12:18:08.095272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.095280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.095600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.095610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.095948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.095957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.096303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.240 [2024-12-05 12:18:08.096312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.240 qpair failed and we were unable to recover it. 00:34:43.240 [2024-12-05 12:18:08.096623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.096631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 
00:34:43.241 [2024-12-05 12:18:08.096819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.096827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.097145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.097154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.097487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.097495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.097823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.097835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.098123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.098132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 
00:34:43.241 [2024-12-05 12:18:08.098333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.098343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.098649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.098659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.098970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.098978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.099347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.099354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.099692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.099703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 
00:34:43.241 [2024-12-05 12:18:08.100000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.100010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.100332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.100341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.100687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.100696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.100901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.100911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.101248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.101255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 
00:34:43.241 [2024-12-05 12:18:08.101436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.101444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.101766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.101776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.102119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.102127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.103616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.103646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.103890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.103903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 
00:34:43.241 [2024-12-05 12:18:08.104239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.104250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.104555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.104563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.104889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.104898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.105223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.105231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.105465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.105473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 
00:34:43.241 [2024-12-05 12:18:08.105820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.105829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.106133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.106141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.106341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.106350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.106675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.106686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 00:34:43.241 [2024-12-05 12:18:08.106999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.107008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it. 
00:34:43.241 [2024-12-05 12:18:08.107225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.241 [2024-12-05 12:18:08.107235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.241 qpair failed and we were unable to recover it.
00:34:43.241-00:34:43.244 [... the same posix.c:1054 "connect() failed, errno = 111" / nvme_tcp.c:2288 "sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420" message pair repeats for every subsequent connection attempt through 12:18:08.143520; each qpair failed and could not be recovered ...]
00:34:43.244 [2024-12-05 12:18:08.143753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.244 [2024-12-05 12:18:08.143763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.244 qpair failed and we were unable to recover it. 00:34:43.244 [2024-12-05 12:18:08.144118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.244 [2024-12-05 12:18:08.144125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.244 qpair failed and we were unable to recover it. 00:34:43.244 [2024-12-05 12:18:08.144433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.244 [2024-12-05 12:18:08.144441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.244 qpair failed and we were unable to recover it. 00:34:43.244 [2024-12-05 12:18:08.144819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.244 [2024-12-05 12:18:08.144828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.244 qpair failed and we were unable to recover it. 00:34:43.244 [2024-12-05 12:18:08.145112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.145120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 
00:34:43.245 [2024-12-05 12:18:08.145342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.145350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.145636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.145646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.145967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.145975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.146319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.146329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.146635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.146644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 
00:34:43.245 [2024-12-05 12:18:08.146974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.146982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.147177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.147187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.147429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.147439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.147774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.147783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.148088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.148098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 
00:34:43.245 [2024-12-05 12:18:08.148422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.148432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.148760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.148769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.148951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.148962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.149243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.149251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.149485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.149494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 
00:34:43.245 [2024-12-05 12:18:08.149800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.149807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.150171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.150179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.150508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.150516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.150693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.150699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.151053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.151061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 
00:34:43.245 [2024-12-05 12:18:08.151380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.151390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.151619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.151627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.151964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.151972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.152178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.152186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.152528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.152535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 
00:34:43.245 [2024-12-05 12:18:08.152859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.152868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.153192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.153200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.153510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.153518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.153864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.153873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.154200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.154209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 
00:34:43.245 [2024-12-05 12:18:08.154527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.154535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.154843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.154851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.154953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.154960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.155183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.155190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.155414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.155422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 
00:34:43.245 [2024-12-05 12:18:08.155773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.155781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.155997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.245 [2024-12-05 12:18:08.156005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.245 qpair failed and we were unable to recover it. 00:34:43.245 [2024-12-05 12:18:08.156237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.156246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.156629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.156636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.156938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.156946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 
00:34:43.246 [2024-12-05 12:18:08.157285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.157292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.157516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.157524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.157718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.157735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.158098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.158105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.158435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.158444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 
00:34:43.246 [2024-12-05 12:18:08.158664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.158672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.158880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.158888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.159120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.159128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.159449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.159463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.159770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.159778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 
00:34:43.246 [2024-12-05 12:18:08.159983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.159990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.160317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.160324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.160488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.160496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.160713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.160721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.160806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.160813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 
00:34:43.246 [2024-12-05 12:18:08.161093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.161101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.161418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.161426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.161737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.161745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.162070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.162078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.162360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.162368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 
00:34:43.246 [2024-12-05 12:18:08.162681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.162689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.163019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.163027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.163350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.163358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.163636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.163644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.163973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.163981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 
00:34:43.246 [2024-12-05 12:18:08.164186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.164194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.164407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.164415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.164760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.164768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.165086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.165094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 00:34:43.246 [2024-12-05 12:18:08.165170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.165178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it. 
00:34:43.246 [2024-12-05 12:18:08.165514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.246 [2024-12-05 12:18:08.165522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.246 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats verbatim ~115 more times, timestamps 12:18:08.165 through 12:18:08.200; errno 111 is ECONNREFUSED ...]
00:34:43.249 [2024-12-05 12:18:08.201160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.249 [2024-12-05 12:18:08.201168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.249 qpair failed and we were unable to recover it. 00:34:43.249 [2024-12-05 12:18:08.201483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.249 [2024-12-05 12:18:08.201492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.201791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.201799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.202005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.202012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.202384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.202393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 
00:34:43.250 [2024-12-05 12:18:08.202779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.202788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.203113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.203122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.203320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.203329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.203652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.203662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.203989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.203998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 
00:34:43.250 [2024-12-05 12:18:08.204298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.204307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.204635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.204645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.204970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.204981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.205291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.205299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.205474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.205481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 
00:34:43.250 [2024-12-05 12:18:08.205802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.205810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.206251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.206258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.206566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.206574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.206944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.206951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.207262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.207271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 
00:34:43.250 [2024-12-05 12:18:08.207490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.207498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.207849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.207856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.208063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.208070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.208320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.208328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.208695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.208703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 
00:34:43.250 [2024-12-05 12:18:08.208880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.208888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.209208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.209217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.209551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.209559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.209879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.209887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.210185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.210192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 
00:34:43.250 [2024-12-05 12:18:08.210509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.210517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.210867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.210874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.211202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.211210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.211597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.211605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.211941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.211949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 
00:34:43.250 [2024-12-05 12:18:08.212283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.212291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.212483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.212491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.212767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.212776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.212987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.212994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.250 qpair failed and we were unable to recover it. 00:34:43.250 [2024-12-05 12:18:08.213295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.250 [2024-12-05 12:18:08.213305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 
00:34:43.251 [2024-12-05 12:18:08.213606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.213614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.214020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.214029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.214354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.214361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.214661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.214669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.214998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.215006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 
00:34:43.251 [2024-12-05 12:18:08.215316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.215324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.215632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.215640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.215809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.215817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.216142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.216150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.216502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.216510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 
00:34:43.251 [2024-12-05 12:18:08.216734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.216741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.216941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.216949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.217289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.217297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.217609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.217617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.217918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.217926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 
00:34:43.251 [2024-12-05 12:18:08.218259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.218267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.218583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.218591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.218899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.218907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.219089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.219097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.219540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.219549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 
00:34:43.251 [2024-12-05 12:18:08.219763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.219770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.220072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.220080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.220406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.220413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.220803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.220811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.221167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.221174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 
00:34:43.251 [2024-12-05 12:18:08.221489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.221497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.221809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.221817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.222121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.222129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.222446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.222459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.222667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.222674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 
00:34:43.251 [2024-12-05 12:18:08.222882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.222889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.223194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.223203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.223403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.223410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.223749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.223757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.224087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.224095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 
00:34:43.251 [2024-12-05 12:18:08.224407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.224415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.224809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.224817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.251 [2024-12-05 12:18:08.225131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.251 [2024-12-05 12:18:08.225140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.251 qpair failed and we were unable to recover it. 00:34:43.252 [2024-12-05 12:18:08.225442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.252 [2024-12-05 12:18:08.225451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.252 qpair failed and we were unable to recover it. 00:34:43.252 [2024-12-05 12:18:08.225663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.252 [2024-12-05 12:18:08.225671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.252 qpair failed and we were unable to recover it. 
00:34:43.255 [2024-12-05 12:18:08.258416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.255 [2024-12-05 12:18:08.258424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.255 qpair failed and we were unable to recover it. 00:34:43.255 [2024-12-05 12:18:08.258742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.255 [2024-12-05 12:18:08.258750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.255 qpair failed and we were unable to recover it. 00:34:43.255 [2024-12-05 12:18:08.259060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.255 [2024-12-05 12:18:08.259069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.255 qpair failed and we were unable to recover it. 00:34:43.255 [2024-12-05 12:18:08.259445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.255 [2024-12-05 12:18:08.259452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.255 qpair failed and we were unable to recover it. 00:34:43.255 [2024-12-05 12:18:08.259851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.255 [2024-12-05 12:18:08.259858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.255 qpair failed and we were unable to recover it. 
00:34:43.255 [2024-12-05 12:18:08.260270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.255 [2024-12-05 12:18:08.260278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.255 qpair failed and we were unable to recover it. 00:34:43.255 [2024-12-05 12:18:08.260495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.255 [2024-12-05 12:18:08.260503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.255 qpair failed and we were unable to recover it. 00:34:43.255 [2024-12-05 12:18:08.260817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.255 [2024-12-05 12:18:08.260826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.255 qpair failed and we were unable to recover it. 00:34:43.255 [2024-12-05 12:18:08.261145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.255 [2024-12-05 12:18:08.261152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.255 qpair failed and we were unable to recover it. 00:34:43.255 [2024-12-05 12:18:08.261351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.255 [2024-12-05 12:18:08.261358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.255 qpair failed and we were unable to recover it. 
00:34:43.527 [2024-12-05 12:18:08.261776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.261788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.262009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.262017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.262342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.262351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.262535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.262545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.262751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.262758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 
00:34:43.527 [2024-12-05 12:18:08.263152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.263161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.263380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.263389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.263770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.263778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.264113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.264121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.264469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.264478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 
00:34:43.527 [2024-12-05 12:18:08.264771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.264779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.265189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.265197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.265547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.265556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.265942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.265950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.266280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.266288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 
00:34:43.527 [2024-12-05 12:18:08.266466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.266474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.266776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.266787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.267121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.267129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.267438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.267447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.267664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.267671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 
00:34:43.527 [2024-12-05 12:18:08.267977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.267985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.268317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.268325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.268659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.268667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.269006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.269015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.269318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.269326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 
00:34:43.527 [2024-12-05 12:18:08.269632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.269640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.269847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.269854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.270194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.270202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.270534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.270542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.527 qpair failed and we were unable to recover it. 00:34:43.527 [2024-12-05 12:18:08.270875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.527 [2024-12-05 12:18:08.270884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 
00:34:43.528 [2024-12-05 12:18:08.271096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.271104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.271383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.271391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.271725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.271733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.272130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.272137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.272412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.272420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 
00:34:43.528 [2024-12-05 12:18:08.272597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.272606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.272948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.272956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.273140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.273148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.273533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.273542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.273890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.273897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 
00:34:43.528 [2024-12-05 12:18:08.274203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.274212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.274495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.274502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.274765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.274773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.275109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.275118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.275431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.275441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 
00:34:43.528 [2024-12-05 12:18:08.275820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.275828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.276151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.276159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.276373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.276380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.276786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.276794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.277097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.277106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 
00:34:43.528 [2024-12-05 12:18:08.277330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.277339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.277654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.277662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.277830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.277839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.278046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.278054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.278355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.278363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 
00:34:43.528 [2024-12-05 12:18:08.278693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.278701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.279037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.279045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.279364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.279374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.279685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.279695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 00:34:43.528 [2024-12-05 12:18:08.279880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.528 [2024-12-05 12:18:08.279888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.528 qpair failed and we were unable to recover it. 
00:34:43.529 [2024-12-05 12:18:08.280223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-12-05 12:18:08.280230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-12-05 12:18:08.280661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-12-05 12:18:08.280669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-12-05 12:18:08.281013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-12-05 12:18:08.281021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-12-05 12:18:08.281344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-12-05 12:18:08.281352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 00:34:43.529 [2024-12-05 12:18:08.281592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.529 [2024-12-05 12:18:08.281600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:43.529 qpair failed and we were unable to recover it. 
00:34:43.529 [2024-12-05 12:18:08.281920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.281928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.282250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.282258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.282587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.282595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.283004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.283011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.283349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.283358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.283653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.283661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.283990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.283998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.284324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.284335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.284631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.284641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.284973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.284980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.285191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.285199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.285527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.285535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.285870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.285878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.286199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.286208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.286535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.286544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.286885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.286893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.287210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.287218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.287528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.287536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.287756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.287765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.288109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.288120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.288465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.288475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.288769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.288778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.289068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.289076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.289393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.529 [2024-12-05 12:18:08.289401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.529 qpair failed and we were unable to recover it.
00:34:43.529 [2024-12-05 12:18:08.289708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.289716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.290034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.290042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.290379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.290387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.290616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.290625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.290794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.290802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.291083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.291092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.291420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.291428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.291757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.291766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.292068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.292076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.292220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.292229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.292485] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c9e10 is same with the state(6) to be set
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Write completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Write completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Write completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Write completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Write completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Write completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Write completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Write completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Write completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Write completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 Read completed with error (sct=0, sc=8)
00:34:43.530 starting I/O failed
00:34:43.530 [2024-12-05 12:18:08.293415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:34:43.530 [2024-12-05 12:18:08.294053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.294115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.294476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.294490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.294713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.294724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.294977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.294988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.295183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.295194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.295540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.530 [2024-12-05 12:18:08.295553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.530 qpair failed and we were unable to recover it.
00:34:43.530 [2024-12-05 12:18:08.295910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.295921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.296224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.296235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.296590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.296601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.296925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.296936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.297284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.297294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.297487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.297500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.297889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.297900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.298097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.298108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.298308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.298320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.298659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.298671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.298882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.298894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.299229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.299241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.299575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.299591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.299904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.299915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.300235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.300247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.300614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.300626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.300968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.300978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.301320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.301331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.301738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.301749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.302107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.302118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.302470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.302482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.302818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.302830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.303158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.303168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.303578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.303589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.303919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.303929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.304282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.304292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.304618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.304630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.304935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.304946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.305324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.531 [2024-12-05 12:18:08.305336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.531 qpair failed and we were unable to recover it.
00:34:43.531 [2024-12-05 12:18:08.305680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.305691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.306101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.306113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.306474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.306487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.306857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.306868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.307177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.307189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.307510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.307520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.307831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.307852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.308071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.308082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.308396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.308408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.308631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.308642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.308993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.309004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.309249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.309262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.309574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.309586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.309913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.309923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.310268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.310278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.310479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.310491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.310811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.310823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.311176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.311187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.311493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.311506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.311841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.311852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.312179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.312198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.312426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.312436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.312729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.312740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.313075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.313088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.313394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.313406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.532 [2024-12-05 12:18:08.313747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.532 [2024-12-05 12:18:08.313759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.532 qpair failed and we were unable to recover it.
00:34:43.533 [2024-12-05 12:18:08.313973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.533 [2024-12-05 12:18:08.313984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.533 qpair failed and we were unable to recover it.
00:34:43.533 [2024-12-05 12:18:08.314313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.533 [2024-12-05 12:18:08.314323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.533 qpair failed and we were unable to recover it.
00:34:43.533 [2024-12-05 12:18:08.314630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.533 [2024-12-05 12:18:08.314641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.533 qpair failed and we were unable to recover it.
00:34:43.533 [2024-12-05 12:18:08.314968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.533 [2024-12-05 12:18:08.314978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.533 qpair failed and we were unable to recover it.
00:34:43.533 [2024-12-05 12:18:08.315286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.533 [2024-12-05 12:18:08.315297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.533 qpair failed and we were unable to recover it.
00:34:43.533 [2024-12-05 12:18:08.315624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.533 [2024-12-05 12:18:08.315636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.533 qpair failed and we were unable to recover it.
00:34:43.533 [2024-12-05 12:18:08.315946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.533 [2024-12-05 12:18:08.315957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.533 qpair failed and we were unable to recover it.
00:34:43.533 [2024-12-05 12:18:08.316275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.533 [2024-12-05 12:18:08.316287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.533 qpair failed and we were unable to recover it.
00:34:43.533 [2024-12-05 12:18:08.316490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.533 [2024-12-05 12:18:08.316503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.533 qpair failed and we were unable to recover it.
00:34:43.533 [2024-12-05 12:18:08.316717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.533 [2024-12-05 12:18:08.316727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.533 qpair failed and we were unable to recover it.
00:34:43.533 [2024-12-05 12:18:08.317011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.533 [2024-12-05 12:18:08.317022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.533 qpair failed and we were unable to recover it.
00:34:43.533 [2024-12-05 12:18:08.317249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.317259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 00:34:43.533 [2024-12-05 12:18:08.317527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.317538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 00:34:43.533 [2024-12-05 12:18:08.317751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.317762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 00:34:43.533 [2024-12-05 12:18:08.318143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.318153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 00:34:43.533 [2024-12-05 12:18:08.318449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.318466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 
00:34:43.533 [2024-12-05 12:18:08.318860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.318871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 00:34:43.533 [2024-12-05 12:18:08.319193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.319205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 00:34:43.533 [2024-12-05 12:18:08.319484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.319496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 00:34:43.533 [2024-12-05 12:18:08.319807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.319820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 00:34:43.533 [2024-12-05 12:18:08.320190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.320202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 
00:34:43.533 [2024-12-05 12:18:08.320544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.320559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 00:34:43.533 [2024-12-05 12:18:08.320817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.320830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 00:34:43.533 [2024-12-05 12:18:08.321181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.321195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 00:34:43.533 [2024-12-05 12:18:08.321338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.321350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 00:34:43.533 [2024-12-05 12:18:08.321665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.321676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 
00:34:43.533 [2024-12-05 12:18:08.322008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.533 [2024-12-05 12:18:08.322019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.533 qpair failed and we were unable to recover it. 00:34:43.533 [2024-12-05 12:18:08.322388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.322400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.322724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.322735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.322941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.322954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.323195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.323205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 
00:34:43.534 [2024-12-05 12:18:08.323554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.323567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.323915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.323926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.324121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.324132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.324445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.324469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.324770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.324781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 
00:34:43.534 [2024-12-05 12:18:08.324990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.325001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.325205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.325220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.325559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.325570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.325947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.325958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.326177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.326190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 
00:34:43.534 [2024-12-05 12:18:08.326517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.326529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.326750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.326764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.326900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.326913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.327249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.327259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.327611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.327624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 
00:34:43.534 [2024-12-05 12:18:08.327826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.327837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.328128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.328140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.328536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.328547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.328763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.328774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.329149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.329160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 
00:34:43.534 [2024-12-05 12:18:08.329535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.329546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.329935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.329947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.330167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.330179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.330381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.330393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.330736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.330749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 
00:34:43.534 [2024-12-05 12:18:08.331101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.331121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.534 [2024-12-05 12:18:08.331453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.534 [2024-12-05 12:18:08.331475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.534 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.331725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.331737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.331967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.331980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.332181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.332194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 
00:34:43.535 [2024-12-05 12:18:08.332399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.332411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.332769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.332783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.333142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.333154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.333368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.333381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.333645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.333656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 
00:34:43.535 [2024-12-05 12:18:08.334023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.334034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.334439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.334451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.334699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.334710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.334951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.334963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.335165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.335177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 
00:34:43.535 [2024-12-05 12:18:08.335606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.335618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.335932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.335943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.336272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.336283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.336498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.336511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.336895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.336905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 
00:34:43.535 [2024-12-05 12:18:08.337208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.337221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.337530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.337547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.337770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.337781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.337996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.338007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.338322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.338333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 
00:34:43.535 [2024-12-05 12:18:08.338580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.338591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.338826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.338837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.339181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.339194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.339407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.339419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.535 qpair failed and we were unable to recover it. 00:34:43.535 [2024-12-05 12:18:08.339735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.535 [2024-12-05 12:18:08.339747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.536 qpair failed and we were unable to recover it. 
00:34:43.536 [2024-12-05 12:18:08.340083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.536 [2024-12-05 12:18:08.340096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.536 qpair failed and we were unable to recover it.
[... the same three-message failure (connect() errno = 111, tqpair=0x7f277c000b90, addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it") repeats for every retry from 12:18:08.340422 through 12:18:08.375106 ...]
00:34:43.540 [2024-12-05 12:18:08.375314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.540 [2024-12-05 12:18:08.375325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.540 qpair failed and we were unable to recover it.
00:34:43.540 [2024-12-05 12:18:08.375464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.375476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.375729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.375740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.376112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.376123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.376466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.376478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.376667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.376680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 
00:34:43.540 [2024-12-05 12:18:08.376982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.376992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.377177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.377189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.377485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.377497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.377896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.377907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.378304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.378315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 
00:34:43.540 [2024-12-05 12:18:08.378575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.378586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.378919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.378929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.379100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.379113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.379350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.379361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.379539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.379551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 
00:34:43.540 [2024-12-05 12:18:08.379895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.379907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.380323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.380334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.380649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.380661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.380990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.381000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.381202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.381213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 
00:34:43.540 [2024-12-05 12:18:08.381539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.381551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.381924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.381936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.382273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.382284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.382523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.382535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 00:34:43.540 [2024-12-05 12:18:08.382931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.540 [2024-12-05 12:18:08.382943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.540 qpair failed and we were unable to recover it. 
00:34:43.540 [2024-12-05 12:18:08.383277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.383290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.383465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.383478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.383894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.383905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.384233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.384245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.384589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.384602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 
00:34:43.541 [2024-12-05 12:18:08.384667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.384678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.384924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.384935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.385309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.385321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.385640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.385650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.385955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.385967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 
00:34:43.541 [2024-12-05 12:18:08.386297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.386307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.386661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.386676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.387005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.387015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.387341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.387352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.387697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.387708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 
00:34:43.541 [2024-12-05 12:18:08.388048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.388060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.388336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.388346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.388708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.388719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.389032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.389042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.389377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.389389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 
00:34:43.541 [2024-12-05 12:18:08.389626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.389636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.389969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.389980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.390344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.390354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.390729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.390740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.390947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.390957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 
00:34:43.541 [2024-12-05 12:18:08.391292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.391302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.391544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.391555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.391768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.391779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.392132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.392143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.392344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.392354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 
00:34:43.541 [2024-12-05 12:18:08.392633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.392644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.541 [2024-12-05 12:18:08.392853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.541 [2024-12-05 12:18:08.392863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.541 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.393179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.393189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.393482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.393493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.393801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.393811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 
00:34:43.542 [2024-12-05 12:18:08.394019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.394029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.394353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.394364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.394543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.394555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.394861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.394873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.395201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.395211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 
00:34:43.542 [2024-12-05 12:18:08.395423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.395433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.395617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.395628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.395971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.395981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.396272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.396283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.396497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.396509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 
00:34:43.542 [2024-12-05 12:18:08.396742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.396752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.397066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.397077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.397313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.397323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.397658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.397670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.397873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.397883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 
00:34:43.542 [2024-12-05 12:18:08.398218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.398229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.398583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.398600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.398912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.398923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.399160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.399171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 00:34:43.542 [2024-12-05 12:18:08.399401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.542 [2024-12-05 12:18:08.399412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.542 qpair failed and we were unable to recover it. 
00:34:43.546 [2024-12-05 12:18:08.430315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.546 [2024-12-05 12:18:08.430325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.546 qpair failed and we were unable to recover it. 00:34:43.546 [2024-12-05 12:18:08.430640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.546 [2024-12-05 12:18:08.430651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.546 qpair failed and we were unable to recover it. 00:34:43.546 [2024-12-05 12:18:08.430947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.546 [2024-12-05 12:18:08.430958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.546 qpair failed and we were unable to recover it. 00:34:43.546 [2024-12-05 12:18:08.431262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.546 [2024-12-05 12:18:08.431273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.546 qpair failed and we were unable to recover it. 00:34:43.546 [2024-12-05 12:18:08.431600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.546 [2024-12-05 12:18:08.431610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.546 qpair failed and we were unable to recover it. 
00:34:43.546 [2024-12-05 12:18:08.431931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.546 [2024-12-05 12:18:08.431943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.546 qpair failed and we were unable to recover it. 00:34:43.546 [2024-12-05 12:18:08.432258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.546 [2024-12-05 12:18:08.432268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.546 qpair failed and we were unable to recover it. 00:34:43.546 [2024-12-05 12:18:08.432471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.546 [2024-12-05 12:18:08.432482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.546 qpair failed and we were unable to recover it. 00:34:43.546 [2024-12-05 12:18:08.432717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.546 [2024-12-05 12:18:08.432727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.546 qpair failed and we were unable to recover it. 00:34:43.546 [2024-12-05 12:18:08.433068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.433078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 
00:34:43.547 [2024-12-05 12:18:08.433400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.433420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.433572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.433582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.433938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.433948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.434266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.434276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.434581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.434592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 
00:34:43.547 [2024-12-05 12:18:08.434927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.434937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.435306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.435316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.435547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.435558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.435860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.435870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.436182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.436194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 
00:34:43.547 [2024-12-05 12:18:08.436518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.436529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.436792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.436802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.437154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.437164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.437350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.437360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.437702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.437712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 
00:34:43.547 [2024-12-05 12:18:08.438021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.438032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.438402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.438413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.438575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.438586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.438923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.438933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.439235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.439256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 
00:34:43.547 [2024-12-05 12:18:08.439472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.439483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.439814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.439825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.440025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.440035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.440380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.440393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.440650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.440660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 
00:34:43.547 [2024-12-05 12:18:08.441071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.441082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.441386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.441397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.441631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.441642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.441985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.441996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.547 qpair failed and we were unable to recover it. 00:34:43.547 [2024-12-05 12:18:08.442182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.547 [2024-12-05 12:18:08.442193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 
00:34:43.548 [2024-12-05 12:18:08.442564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.442575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.442946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.442957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.443133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.443145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.443339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.443350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.443734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.443746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 
00:34:43.548 [2024-12-05 12:18:08.444086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.444096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.444342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.444353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.444777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.444788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.444991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.445001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.445353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.445363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 
00:34:43.548 [2024-12-05 12:18:08.445689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.445699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.446012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.446022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.446346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.446357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.446665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.446676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.446990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.447009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 
00:34:43.548 [2024-12-05 12:18:08.447242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.447252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.447609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.447620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.447831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.447842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.448178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.448188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.448521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.448532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 
00:34:43.548 [2024-12-05 12:18:08.448863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.448873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.449037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.449047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.449377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.449388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.449712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.449724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.449969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.449978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 
00:34:43.548 [2024-12-05 12:18:08.450225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.450235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.450502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.450514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.450855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.450865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.451114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.451124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.548 qpair failed and we were unable to recover it. 00:34:43.548 [2024-12-05 12:18:08.451218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.548 [2024-12-05 12:18:08.451229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 
00:34:43.549 [2024-12-05 12:18:08.451551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-12-05 12:18:08.451561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-12-05 12:18:08.451851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-12-05 12:18:08.451862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-12-05 12:18:08.452078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-12-05 12:18:08.452090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-12-05 12:18:08.452413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-12-05 12:18:08.452426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-12-05 12:18:08.452634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-12-05 12:18:08.452646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 
00:34:43.549 [2024-12-05 12:18:08.452930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-12-05 12:18:08.452940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-12-05 12:18:08.453156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-12-05 12:18:08.453166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-12-05 12:18:08.453486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-12-05 12:18:08.453497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-12-05 12:18:08.453813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-12-05 12:18:08.453833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 00:34:43.549 [2024-12-05 12:18:08.454068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.549 [2024-12-05 12:18:08.454078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.549 qpair failed and we were unable to recover it. 
00:34:43.553 [2024-12-05 12:18:08.488768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.488779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.488995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.489005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.489332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.489343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.489762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.489775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.490084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.490096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 
00:34:43.553 [2024-12-05 12:18:08.490296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.490306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.490505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.490517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.490917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.490928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.491226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.491238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.491595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.491606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 
00:34:43.553 [2024-12-05 12:18:08.491991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.492003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.492330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.492340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.492644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.492655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.492998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.493008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.493326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.493339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 
00:34:43.553 [2024-12-05 12:18:08.493682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.493693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.494003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.494013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.494342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.494352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.494550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.494561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.494946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.494957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 
00:34:43.553 [2024-12-05 12:18:08.495278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.553 [2024-12-05 12:18:08.495291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.553 qpair failed and we were unable to recover it. 00:34:43.553 [2024-12-05 12:18:08.495634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.495646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.495959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.495969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.496314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.496326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.496658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.496669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 
00:34:43.554 [2024-12-05 12:18:08.496983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.496994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.497301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.497313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.497632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.497644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.497991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.498002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.498311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.498323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 
00:34:43.554 [2024-12-05 12:18:08.498548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.498559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.498856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.498867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.499194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.499205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.499524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.499536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.499749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.499761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 
00:34:43.554 [2024-12-05 12:18:08.499988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.499999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.500376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.500386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.500614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.500625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.501006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.501016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.501337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.501348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 
00:34:43.554 [2024-12-05 12:18:08.501612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.501623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.501919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.501929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.502321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.502331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.502582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.502596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.502933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.502943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 
00:34:43.554 [2024-12-05 12:18:08.503249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.503260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.503592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.503603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.503818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.503828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.504017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.504028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 00:34:43.554 [2024-12-05 12:18:08.504346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.554 [2024-12-05 12:18:08.504358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.554 qpair failed and we were unable to recover it. 
00:34:43.554 [2024-12-05 12:18:08.504761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.504774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.505093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.505103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.505284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.505295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.505495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.505507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.505824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.505834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 
00:34:43.555 [2024-12-05 12:18:08.506244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.506254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.506606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.506618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.506946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.506957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.507349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.507361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.507751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.507762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 
00:34:43.555 [2024-12-05 12:18:08.508062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.508073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.508469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.508483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.508831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.508842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.509185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.509195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.509539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.509550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 
00:34:43.555 [2024-12-05 12:18:08.509890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.509901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.510217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.510228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.510617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.510629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.510988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.510998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.511341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.511352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 
00:34:43.555 [2024-12-05 12:18:08.511681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.511693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.512018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.512028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.512264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.512274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.512498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.512509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 00:34:43.555 [2024-12-05 12:18:08.512864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.555 [2024-12-05 12:18:08.512874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.555 qpair failed and we were unable to recover it. 
00:34:43.555 [2024-12-05 12:18:08.513179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.555 [2024-12-05 12:18:08.513191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.555 qpair failed and we were unable to recover it.
00:34:43.559 [2024-12-05 12:18:08.555640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.559 [2024-12-05 12:18:08.555673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.559 qpair failed and we were unable to recover it. 00:34:43.559 [2024-12-05 12:18:08.556030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.559 [2024-12-05 12:18:08.556060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.559 qpair failed and we were unable to recover it. 00:34:43.559 [2024-12-05 12:18:08.556498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.559 [2024-12-05 12:18:08.556530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.559 qpair failed and we were unable to recover it. 00:34:43.559 [2024-12-05 12:18:08.556883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.559 [2024-12-05 12:18:08.556913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.559 qpair failed and we were unable to recover it. 00:34:43.559 [2024-12-05 12:18:08.557198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.557227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 
00:34:43.560 [2024-12-05 12:18:08.557588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.557618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.560 [2024-12-05 12:18:08.557974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.558003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.560 [2024-12-05 12:18:08.558436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.558477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.560 [2024-12-05 12:18:08.558875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.558905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.560 [2024-12-05 12:18:08.559250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.559280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 
00:34:43.560 [2024-12-05 12:18:08.559649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.559681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.560 [2024-12-05 12:18:08.560050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.560079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.560 [2024-12-05 12:18:08.560446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.560488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.560 [2024-12-05 12:18:08.560732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.560772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.560 [2024-12-05 12:18:08.561155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.561184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 
00:34:43.560 [2024-12-05 12:18:08.561618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.561650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.560 [2024-12-05 12:18:08.561924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.561953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.560 [2024-12-05 12:18:08.562301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.562330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.560 [2024-12-05 12:18:08.562686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.562718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.560 [2024-12-05 12:18:08.563074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.563103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 
00:34:43.560 [2024-12-05 12:18:08.563539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.563570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.560 [2024-12-05 12:18:08.563929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.563959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.560 [2024-12-05 12:18:08.564367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.560 [2024-12-05 12:18:08.564397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.560 qpair failed and we were unable to recover it. 00:34:43.833 [2024-12-05 12:18:08.564667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.564700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.833 qpair failed and we were unable to recover it. 00:34:43.833 [2024-12-05 12:18:08.565044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.565076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.833 qpair failed and we were unable to recover it. 
00:34:43.833 [2024-12-05 12:18:08.565435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.565480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.833 qpair failed and we were unable to recover it. 00:34:43.833 [2024-12-05 12:18:08.565840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.565871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.833 qpair failed and we were unable to recover it. 00:34:43.833 [2024-12-05 12:18:08.566116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.566145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.833 qpair failed and we were unable to recover it. 00:34:43.833 [2024-12-05 12:18:08.566500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.566537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.833 qpair failed and we were unable to recover it. 00:34:43.833 [2024-12-05 12:18:08.566808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.566836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.833 qpair failed and we were unable to recover it. 
00:34:43.833 [2024-12-05 12:18:08.567198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.567228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.833 qpair failed and we were unable to recover it. 00:34:43.833 [2024-12-05 12:18:08.567494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.567527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.833 qpair failed and we were unable to recover it. 00:34:43.833 [2024-12-05 12:18:08.567898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.567929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.833 qpair failed and we were unable to recover it. 00:34:43.833 [2024-12-05 12:18:08.568310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.568340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.833 qpair failed and we were unable to recover it. 00:34:43.833 [2024-12-05 12:18:08.568707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.568737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.833 qpair failed and we were unable to recover it. 
00:34:43.833 [2024-12-05 12:18:08.569169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.569199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.833 qpair failed and we were unable to recover it. 00:34:43.833 [2024-12-05 12:18:08.569495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.569525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.833 qpair failed and we were unable to recover it. 00:34:43.833 [2024-12-05 12:18:08.569900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.833 [2024-12-05 12:18:08.569930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.570176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.570210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.570583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.570614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 
00:34:43.834 [2024-12-05 12:18:08.571044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.571076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.571416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.571446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.571751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.571781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.572066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.572097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.572452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.572505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 
00:34:43.834 [2024-12-05 12:18:08.572836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.572864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.573221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.573252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.573612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.573645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.574038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.574068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.574429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.574471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 
00:34:43.834 [2024-12-05 12:18:08.574833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.574864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.575075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.575104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.575477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.575508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.575872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.575909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.576278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.576309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 
00:34:43.834 [2024-12-05 12:18:08.576677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.576708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.577074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.577104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.577312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.577341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.577717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.577749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.578089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.578120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 
00:34:43.834 [2024-12-05 12:18:08.578469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.578501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.578844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.578875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.579134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.579163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.579588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.579619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 00:34:43.834 [2024-12-05 12:18:08.579866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.834 [2024-12-05 12:18:08.579897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.834 qpair failed and we were unable to recover it. 
00:34:43.834 [2024-12-05 12:18:08.580248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.835 [2024-12-05 12:18:08.580278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.835 qpair failed and we were unable to recover it. 00:34:43.835 [2024-12-05 12:18:08.580678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.835 [2024-12-05 12:18:08.580709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.835 qpair failed and we were unable to recover it. 00:34:43.835 [2024-12-05 12:18:08.581073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.835 [2024-12-05 12:18:08.581104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.835 qpair failed and we were unable to recover it. 00:34:43.835 [2024-12-05 12:18:08.581358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.835 [2024-12-05 12:18:08.581388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.835 qpair failed and we were unable to recover it. 00:34:43.835 [2024-12-05 12:18:08.581752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.835 [2024-12-05 12:18:08.581784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.835 qpair failed and we were unable to recover it. 
00:34:43.835 [2024-12-05 12:18:08.582126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.835 [2024-12-05 12:18:08.582155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.835 qpair failed and we were unable to recover it. 00:34:43.835 [2024-12-05 12:18:08.582499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.835 [2024-12-05 12:18:08.582532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.835 qpair failed and we were unable to recover it. 00:34:43.835 [2024-12-05 12:18:08.582873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.835 [2024-12-05 12:18:08.582903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.835 qpair failed and we were unable to recover it. 00:34:43.835 [2024-12-05 12:18:08.583264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.835 [2024-12-05 12:18:08.583295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.835 qpair failed and we were unable to recover it. 00:34:43.835 [2024-12-05 12:18:08.583556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.835 [2024-12-05 12:18:08.583587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.835 qpair failed and we were unable to recover it. 
00:34:43.835 [2024-12-05 12:18:08.583942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.583971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.584318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.584349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.584700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.584731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.585100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.585131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.585377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.585409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.585821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.585860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.586255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.586286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.586477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.586510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.586903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.586932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.587273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.587304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.587641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.587674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.588085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.588115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.588500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.588536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.588876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.588907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.589154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.589183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.589565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.589596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.589964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.589994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.590437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.590477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.590794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.590824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.591193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.591223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.835 qpair failed and we were unable to recover it.
00:34:43.835 [2024-12-05 12:18:08.591486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.835 [2024-12-05 12:18:08.591516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.591893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.591925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.592292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.592322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.592575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.592610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.592897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.592926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.593298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.593332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.593677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.593710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.594076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.594108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.594480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.594513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.594870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.594901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.595261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.595293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.595644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.595676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.596050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.596081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.596436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.596496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.596865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.596896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.597228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.597258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.597510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.597544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.597928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.597958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.598328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.598359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.598727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.598760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.599117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.599147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.599587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.599620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.599984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.600014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.600382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.600414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.600814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.600846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.601193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.601230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.601575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.601606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.601968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.601998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.602351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.602383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.602638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.602671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.836 qpair failed and we were unable to recover it.
00:34:43.836 [2024-12-05 12:18:08.603060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.836 [2024-12-05 12:18:08.603092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.603471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.603504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.603756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.603785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.604160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.604191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.604519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.604551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.604796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.604826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.605205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.605235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.605651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.605683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.606070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.606103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.606477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.606511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.606851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.606884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.607216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.607245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.607591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.607623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.607979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.608009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.608380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.608411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.608813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.608846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.609207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.609238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.609377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.609410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.609832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.609864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.610088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.610120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.610377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.610409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.610763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.610798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.611033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.611064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.611424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.611471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.611880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.611910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.612154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.612183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.612439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.612486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.612823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.612853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.613225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.613256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.613615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.613646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.837 qpair failed and we were unable to recover it.
00:34:43.837 [2024-12-05 12:18:08.614002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.837 [2024-12-05 12:18:08.614033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.614400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.614431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.614807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.614839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.615207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.615240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.615583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.615616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.615965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.616002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.616348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.616379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.616743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.616774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.617005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.617037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.617287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.617320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.617682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.617715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.618072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.618102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.618481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.618512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.618869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.618900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.619273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.619307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.619695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.619726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.620092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.620123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.620499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.620531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.620928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.620958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.621313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.621345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.621573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.621604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.621959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.621992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.622351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.622381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.622755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.622788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.623148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.623177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.623549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.623580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.623952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.623982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.624237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.624268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.624608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.624641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.625009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.625038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.625394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.838 [2024-12-05 12:18:08.625423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.838 qpair failed and we were unable to recover it.
00:34:43.838 [2024-12-05 12:18:08.625802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.839 [2024-12-05 12:18:08.625833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.839 qpair failed and we were unable to recover it.
00:34:43.839 [2024-12-05 12:18:08.626170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.839 [2024-12-05 12:18:08.626200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.839 qpair failed and we were unable to recover it.
00:34:43.839 [2024-12-05 12:18:08.626489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.839 [2024-12-05 12:18:08.626521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.839 qpair failed and we were unable to recover it.
00:34:43.839 [2024-12-05 12:18:08.626900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.839 [2024-12-05 12:18:08.626930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.839 qpair failed and we were unable to recover it.
00:34:43.839 [2024-12-05 12:18:08.627183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.627218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.627735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.627768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.628024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.628056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.628422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.628483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.628855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.628886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 
00:34:43.839 [2024-12-05 12:18:08.629246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.629280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.629542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.629574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.629843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.629876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.630267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.630298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.630693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.630724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 
00:34:43.839 [2024-12-05 12:18:08.631098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.631137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.631487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.631518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.631790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.631823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.632119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.632148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.632421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.632450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 
00:34:43.839 [2024-12-05 12:18:08.632827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.632856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.633237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.633268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.633607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.633639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.633983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.634012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 00:34:43.839 [2024-12-05 12:18:08.634357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.839 [2024-12-05 12:18:08.634386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.839 qpair failed and we were unable to recover it. 
00:34:43.840 [2024-12-05 12:18:08.634718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.634750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.635107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.635135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.635502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.635533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.635906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.635935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.636316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.636345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 
00:34:43.840 [2024-12-05 12:18:08.636788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.636818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.637182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.637211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.637577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.637607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.637973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.638002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.638363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.638392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 
00:34:43.840 [2024-12-05 12:18:08.638796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.638827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.639172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.639201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.639476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.639506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.639778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.639807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.640178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.640207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 
00:34:43.840 [2024-12-05 12:18:08.640528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.640558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.640948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.640978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.641331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.641362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.641621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.641651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.642057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.642086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 
00:34:43.840 [2024-12-05 12:18:08.642443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.642495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.642907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.642938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.643300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.643330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.643692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.643723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.643964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.643993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 
00:34:43.840 [2024-12-05 12:18:08.644340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.644369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.644773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.644803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.645153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.645183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.840 [2024-12-05 12:18:08.645527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.840 [2024-12-05 12:18:08.645558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.840 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.645929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.645958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 
00:34:43.841 [2024-12-05 12:18:08.646321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.646356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.646775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.646806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.647169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.647198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.647577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.647608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.647977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.648007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 
00:34:43.841 [2024-12-05 12:18:08.648368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.648396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.648776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.648806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.649185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.649215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.649574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.649604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.649832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.649865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 
00:34:43.841 [2024-12-05 12:18:08.650093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.650126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.650499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.650530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.650904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.650933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.651296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.651325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.651646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.651677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 
00:34:43.841 [2024-12-05 12:18:08.651796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.651827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.652179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.652209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.652468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.652499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.652878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.652908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.653237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.653265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 
00:34:43.841 [2024-12-05 12:18:08.653615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.653645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.653994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.654024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.654379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.654410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.654831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.654863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 00:34:43.841 [2024-12-05 12:18:08.655098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.655128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it. 
00:34:43.841 [2024-12-05 12:18:08.655597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.841 [2024-12-05 12:18:08.655629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.841 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054:posix_sock_create connect() failed with errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats ~114 more times, timestamps 12:18:08.655961 through 12:18:08.699432 ...]
00:34:43.845 [2024-12-05 12:18:08.699828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.845 [2024-12-05 12:18:08.699859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.845 qpair failed and we were unable to recover it. 00:34:43.845 [2024-12-05 12:18:08.700204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.845 [2024-12-05 12:18:08.700235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.845 qpair failed and we were unable to recover it. 00:34:43.845 [2024-12-05 12:18:08.700672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.845 [2024-12-05 12:18:08.700702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.845 qpair failed and we were unable to recover it. 00:34:43.845 [2024-12-05 12:18:08.701042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.845 [2024-12-05 12:18:08.701072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.845 qpair failed and we were unable to recover it. 00:34:43.845 [2024-12-05 12:18:08.701435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.845 [2024-12-05 12:18:08.701486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 
00:34:43.846 [2024-12-05 12:18:08.701818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.701847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.702100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.702129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.702504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.702536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.702897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.702925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.703175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.703203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 
00:34:43.846 [2024-12-05 12:18:08.703591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.703622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.703968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.703997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.704361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.704390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.704749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.704780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.705144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.705173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 
00:34:43.846 [2024-12-05 12:18:08.705546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.705577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.705955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.705984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.706348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.706377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.706747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.706778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.707172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.707201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 
00:34:43.846 [2024-12-05 12:18:08.707574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.707611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.707974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.708003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.708366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.708395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.708762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.708793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.709160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.709192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 
00:34:43.846 [2024-12-05 12:18:08.709580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.709610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.709971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.710007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.710362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.710392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.710731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.710767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.711145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.711176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 
00:34:43.846 [2024-12-05 12:18:08.711535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.711566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.711923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.711951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.712196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.712224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.846 [2024-12-05 12:18:08.712577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.846 [2024-12-05 12:18:08.712608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.846 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.712946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.712976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 
00:34:43.847 [2024-12-05 12:18:08.713331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.713365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.713803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.713834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.714198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.714228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.714597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.714627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.714990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.715018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 
00:34:43.847 [2024-12-05 12:18:08.715381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.715410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.715650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.715681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.716022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.716052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.716415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.716444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.716700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.716730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 
00:34:43.847 [2024-12-05 12:18:08.717112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.717140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.717515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.717545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.717934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.717964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.718224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.718256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.718647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.718679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 
00:34:43.847 [2024-12-05 12:18:08.719050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.719079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.719439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.719479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.719855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.719884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.720262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.720292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.720479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.720509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 
00:34:43.847 [2024-12-05 12:18:08.720847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.720876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.721219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.721248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.721607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.721638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.721998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.722027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.722396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.722425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 
00:34:43.847 [2024-12-05 12:18:08.722809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.722849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.723190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.723221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.723580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.723611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.723959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.847 [2024-12-05 12:18:08.723989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.847 qpair failed and we were unable to recover it. 00:34:43.847 [2024-12-05 12:18:08.724364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.848 [2024-12-05 12:18:08.724394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.848 qpair failed and we were unable to recover it. 
00:34:43.848 [2024-12-05 12:18:08.724763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.848 [2024-12-05 12:18:08.724793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.848 qpair failed and we were unable to recover it. 00:34:43.848 [2024-12-05 12:18:08.725152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.848 [2024-12-05 12:18:08.725181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.848 qpair failed and we were unable to recover it. 00:34:43.848 [2024-12-05 12:18:08.725571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.848 [2024-12-05 12:18:08.725601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.848 qpair failed and we were unable to recover it. 00:34:43.848 [2024-12-05 12:18:08.725970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.848 [2024-12-05 12:18:08.725999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.848 qpair failed and we were unable to recover it. 00:34:43.848 [2024-12-05 12:18:08.726355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.848 [2024-12-05 12:18:08.726385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.848 qpair failed and we were unable to recover it. 
00:34:43.848 [2024-12-05 12:18:08.726633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.848 [2024-12-05 12:18:08.726664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.848 qpair failed and we were unable to recover it. 00:34:43.848 [2024-12-05 12:18:08.727048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.848 [2024-12-05 12:18:08.727078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.848 qpair failed and we were unable to recover it. 00:34:43.848 [2024-12-05 12:18:08.727310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.848 [2024-12-05 12:18:08.727341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.848 qpair failed and we were unable to recover it. 00:34:43.848 [2024-12-05 12:18:08.727695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.848 [2024-12-05 12:18:08.727726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.848 qpair failed and we were unable to recover it. 00:34:43.848 [2024-12-05 12:18:08.728093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.848 [2024-12-05 12:18:08.728123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.848 qpair failed and we were unable to recover it. 
00:34:43.848 [2024-12-05 12:18:08.728573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.728606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.728877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.728912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.729284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.729315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.729544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.729576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.729963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.729994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.730359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.730388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.730763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.730793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.731156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.731186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.731548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.731578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.731953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.731983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.732353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.732382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.732756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.732787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.733032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.733064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.733300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.733329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.733714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.733744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.734125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.734154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.734517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.734548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.734920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.734949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.735387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.848 [2024-12-05 12:18:08.735417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.848 qpair failed and we were unable to recover it.
00:34:43.848 [2024-12-05 12:18:08.735772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.735802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.736181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.736211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.736471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.736504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.736866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.736895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.737257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.737286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.737627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.737658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.738019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.738055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.738392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.738422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.738809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.738840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.739201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.739230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.739576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.739607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.739973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.740002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.740372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.740401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.740753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.740784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.741016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.741044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.741496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.741527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.741892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.741921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.742255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.742283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.742655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.742686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.743048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.743077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.743437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.743478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.743876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.743906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.744325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.744354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.744773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.744803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.745080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.745109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.745478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.745510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.745852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.745881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.746251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.746280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.746622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.849 [2024-12-05 12:18:08.746653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.849 qpair failed and we were unable to recover it.
00:34:43.849 [2024-12-05 12:18:08.747011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.747039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.747408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.747437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.747815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.747846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.748226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.748255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.748656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.748687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.748942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.748972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.749353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.749382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.749655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.749686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.750144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.750176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.750412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.750442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.750816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.750847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.751211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.751242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.751677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.751707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.752050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.752078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.752440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.752482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.752901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.752931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.753273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.753302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.753729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.753765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.754116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.754146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.754544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.754610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.754905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.754939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.755272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.755303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.755662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.755694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.756030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.756060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.756421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.756451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.756749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.850 [2024-12-05 12:18:08.756782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.850 qpair failed and we were unable to recover it.
00:34:43.850 [2024-12-05 12:18:08.757139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.757169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.757532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.757563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.757956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.757985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.758238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.758268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.758492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.758523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.758869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.758899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.759271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.759300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.759553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.759585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.759955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.759985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.760324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.760354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.760616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.760647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.761016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.761046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.761403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.761434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.761913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.761945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.762285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.762315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.762733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.762765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.763107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.763135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.763513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.763546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.763930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.763962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.764218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.764248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.764684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.764715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.765093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.765122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.765487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.765518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.765867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.765898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.766255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.766286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.766638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.766669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.767034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.767064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.767430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.851 [2024-12-05 12:18:08.767480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.851 qpair failed and we were unable to recover it.
00:34:43.851 [2024-12-05 12:18:08.767821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.852 [2024-12-05 12:18:08.767852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.852 qpair failed and we were unable to recover it.
00:34:43.852 [2024-12-05 12:18:08.768209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.852 [2024-12-05 12:18:08.768239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.852 qpair failed and we were unable to recover it.
00:34:43.852 [2024-12-05 12:18:08.768604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.852 [2024-12-05 12:18:08.768635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.852 qpair failed and we were unable to recover it.
00:34:43.852 [2024-12-05 12:18:08.769066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.852 [2024-12-05 12:18:08.769102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.852 qpair failed and we were unable to recover it.
00:34:43.852 [2024-12-05 12:18:08.769440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.852 [2024-12-05 12:18:08.769482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.852 qpair failed and we were unable to recover it.
00:34:43.852 [2024-12-05 12:18:08.769862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.852 [2024-12-05 12:18:08.769891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.852 qpair failed and we were unable to recover it.
00:34:43.852 [2024-12-05 12:18:08.770245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.852 [2024-12-05 12:18:08.770275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.852 qpair failed and we were unable to recover it.
00:34:43.852 [2024-12-05 12:18:08.770653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.852 [2024-12-05 12:18:08.770683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.852 qpair failed and we were unable to recover it.
00:34:43.852 [2024-12-05 12:18:08.771049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.852 [2024-12-05 12:18:08.771080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.852 qpair failed and we were unable to recover it.
00:34:43.852 [2024-12-05 12:18:08.771438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.852 [2024-12-05 12:18:08.771479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.852 qpair failed and we were unable to recover it.
00:34:43.852 [2024-12-05 12:18:08.771722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.852 [2024-12-05 12:18:08.771755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.852 qpair failed and we were unable to recover it.
00:34:43.852 [2024-12-05 12:18:08.772135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.852 [2024-12-05 12:18:08.772164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.852 qpair failed and we were unable to recover it.
00:34:43.852 [2024-12-05 12:18:08.772421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:43.852 [2024-12-05 12:18:08.772450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:43.852 qpair failed and we were unable to recover it.
00:34:43.852 [2024-12-05 12:18:08.772815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.772845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 00:34:43.852 [2024-12-05 12:18:08.773181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.773212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 00:34:43.852 [2024-12-05 12:18:08.773536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.773569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 00:34:43.852 [2024-12-05 12:18:08.773900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.773930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 00:34:43.852 [2024-12-05 12:18:08.774292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.774322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 
00:34:43.852 [2024-12-05 12:18:08.774678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.774710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 00:34:43.852 [2024-12-05 12:18:08.775073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.775103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 00:34:43.852 [2024-12-05 12:18:08.775490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.775522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 00:34:43.852 [2024-12-05 12:18:08.775896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.775927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 00:34:43.852 [2024-12-05 12:18:08.776265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.776295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 
00:34:43.852 [2024-12-05 12:18:08.776671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.776702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 00:34:43.852 [2024-12-05 12:18:08.777002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.777031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 00:34:43.852 [2024-12-05 12:18:08.777402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.777432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 00:34:43.852 [2024-12-05 12:18:08.777812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.777845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 00:34:43.852 [2024-12-05 12:18:08.778184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.778214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 
00:34:43.852 [2024-12-05 12:18:08.778589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.852 [2024-12-05 12:18:08.778622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.852 qpair failed and we were unable to recover it. 00:34:43.852 [2024-12-05 12:18:08.778980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.779010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.779367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.779404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.779823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.779855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.780100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.780133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 
00:34:43.853 [2024-12-05 12:18:08.780563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.780595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.780940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.780970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.781332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.781362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.781709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.781741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.782107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.782138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 
00:34:43.853 [2024-12-05 12:18:08.782514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.782546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.782902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.782931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.783277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.783308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.783687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.783718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.784123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.784153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 
00:34:43.853 [2024-12-05 12:18:08.784514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.784545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.784911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.784941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.785308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.785338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.785687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.785717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.786079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.786109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 
00:34:43.853 [2024-12-05 12:18:08.786476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.786508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.786854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.786884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.787233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.787264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.787632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.787664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.788026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.788055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 
00:34:43.853 [2024-12-05 12:18:08.788421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.788451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.788810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.788840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.789203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.789233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.789577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.789609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.853 [2024-12-05 12:18:08.789981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.790011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 
00:34:43.853 [2024-12-05 12:18:08.790404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.853 [2024-12-05 12:18:08.790434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.853 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.790781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.790813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.791192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.791221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.791587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.791619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.791990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.792020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 
00:34:43.854 [2024-12-05 12:18:08.792376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.792405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.792816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.792847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.793214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.793244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.793587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.793619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.793957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.793988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 
00:34:43.854 [2024-12-05 12:18:08.794352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.794382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.794813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.794843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.795186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.795221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.795560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.795591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.795940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.795969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 
00:34:43.854 [2024-12-05 12:18:08.796334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.796363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.796705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.796736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.797153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.797183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.797538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.797568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.797936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.797966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 
00:34:43.854 [2024-12-05 12:18:08.798327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.798357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.798718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.798748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.799111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.799140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.799514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.799545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.799794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.799825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 
00:34:43.854 [2024-12-05 12:18:08.800159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.800187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.800559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.800590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.800940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.800969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.801335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.801364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.801612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.801643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 
00:34:43.854 [2024-12-05 12:18:08.802016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.802045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.854 [2024-12-05 12:18:08.802409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.854 [2024-12-05 12:18:08.802439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.854 qpair failed and we were unable to recover it. 00:34:43.855 [2024-12-05 12:18:08.802784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.855 [2024-12-05 12:18:08.802815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.855 qpair failed and we were unable to recover it. 00:34:43.855 [2024-12-05 12:18:08.803177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.855 [2024-12-05 12:18:08.803207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.855 qpair failed and we were unable to recover it. 00:34:43.855 [2024-12-05 12:18:08.803637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.855 [2024-12-05 12:18:08.803668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.855 qpair failed and we were unable to recover it. 
00:34:43.858 [2024-12-05 12:18:08.845026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-12-05 12:18:08.845056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.858 [2024-12-05 12:18:08.845424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.858 [2024-12-05 12:18:08.845464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.858 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.845726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.845759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.846154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.846184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.846562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.846595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 
00:34:43.859 [2024-12-05 12:18:08.846846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.846875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.847353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.847383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.847655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.847686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.848084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.848113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.848537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.848569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 
00:34:43.859 [2024-12-05 12:18:08.848814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.848846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.849206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.849237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.849680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.849711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.850048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.850079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.850328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.850362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 
00:34:43.859 [2024-12-05 12:18:08.850732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.850765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.851124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.851152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.851532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.851562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.851952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.851981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.852346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.852375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 
00:34:43.859 [2024-12-05 12:18:08.852779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.852811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.853165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.853193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.853561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.853592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.853939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.853968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.854337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.854366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 
00:34:43.859 [2024-12-05 12:18:08.854715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.854746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.855111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.855142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.855512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.855549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.855892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.855921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.856270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.856299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 
00:34:43.859 [2024-12-05 12:18:08.856557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.859 [2024-12-05 12:18:08.856587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.859 qpair failed and we were unable to recover it. 00:34:43.859 [2024-12-05 12:18:08.856940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.856968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.857315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.857344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.857702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.857734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.857942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.857972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 
00:34:43.860 [2024-12-05 12:18:08.858324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.858353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.858750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.858781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.859218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.859247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.859498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.859527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.859894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.859924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 
00:34:43.860 [2024-12-05 12:18:08.860275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.860304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.860567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.860598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.861031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.861060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.861294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.861326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.861668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.861699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 
00:34:43.860 [2024-12-05 12:18:08.862081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.862111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.862478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.862509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.862768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.862797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.863063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.863092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.863470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.863501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 
00:34:43.860 [2024-12-05 12:18:08.863755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.863784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.864147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.864177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.864537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.864569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.864818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.864848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.865107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.865139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 
00:34:43.860 [2024-12-05 12:18:08.865505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.865536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.865807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.865836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.866188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.866216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.866665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.866695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.867035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.867064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 
00:34:43.860 [2024-12-05 12:18:08.867405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.860 [2024-12-05 12:18:08.867433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.860 qpair failed and we were unable to recover it. 00:34:43.860 [2024-12-05 12:18:08.867824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-12-05 12:18:08.867855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-12-05 12:18:08.868189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-12-05 12:18:08.868218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-12-05 12:18:08.868548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-12-05 12:18:08.868579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-12-05 12:18:08.868952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-12-05 12:18:08.868983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 
00:34:43.861 [2024-12-05 12:18:08.869351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-12-05 12:18:08.869381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-12-05 12:18:08.869742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-12-05 12:18:08.869773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-12-05 12:18:08.870031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-12-05 12:18:08.870067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-12-05 12:18:08.870331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-12-05 12:18:08.870363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 00:34:43.861 [2024-12-05 12:18:08.870715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:43.861 [2024-12-05 12:18:08.870746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:43.861 qpair failed and we were unable to recover it. 
00:34:44.133 [2024-12-05 12:18:08.871105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-12-05 12:18:08.871139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-12-05 12:18:08.871393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-12-05 12:18:08.871424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-12-05 12:18:08.871749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-12-05 12:18:08.871780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-12-05 12:18:08.872126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-12-05 12:18:08.872155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 00:34:44.133 [2024-12-05 12:18:08.872535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-12-05 12:18:08.872567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it. 
00:34:44.133 [2024-12-05 12:18:08.872921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.133 [2024-12-05 12:18:08.872950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.133 qpair failed and we were unable to recover it.
[The same three-message sequence — posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock connection error for tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it" — repeats verbatim with timestamps from 12:18:08.873318 through 12:18:08.916583; repeated occurrences omitted here.]
00:34:44.136 [2024-12-05 12:18:08.916962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-12-05 12:18:08.916992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-12-05 12:18:08.917239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-12-05 12:18:08.917270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-12-05 12:18:08.917503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-12-05 12:18:08.917533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-12-05 12:18:08.917872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-12-05 12:18:08.917901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-12-05 12:18:08.918274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-12-05 12:18:08.918302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 
00:34:44.136 [2024-12-05 12:18:08.918665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-12-05 12:18:08.918696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-12-05 12:18:08.919063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-12-05 12:18:08.919092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-12-05 12:18:08.919469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-12-05 12:18:08.919501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-12-05 12:18:08.919864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-12-05 12:18:08.919893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-12-05 12:18:08.920259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-12-05 12:18:08.920288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 
00:34:44.136 [2024-12-05 12:18:08.920654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-12-05 12:18:08.920685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-12-05 12:18:08.921041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-12-05 12:18:08.921071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-12-05 12:18:08.921319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.136 [2024-12-05 12:18:08.921349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.136 qpair failed and we were unable to recover it. 00:34:44.136 [2024-12-05 12:18:08.921703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.921736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.922077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.922106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-12-05 12:18:08.922479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.922509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.922907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.922936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.923303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.923332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.923684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.923713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.924118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.924148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-12-05 12:18:08.924507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.924538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.924925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.924954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.925305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.925334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.925640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.925670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.926035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.926067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-12-05 12:18:08.926424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.926466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.926796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.926826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.927188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.927219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.927576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.927607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.928008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.928037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-12-05 12:18:08.928365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.928394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.928771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.928803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.929054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.929083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.929428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.929465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.929835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.929864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-12-05 12:18:08.930233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.930262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.930626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.930657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.931025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.931061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.931417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.931446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.931837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.931866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-12-05 12:18:08.932237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.932265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.932510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.932541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.932802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.932830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.933164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.933193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.933544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.933574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 
00:34:44.137 [2024-12-05 12:18:08.933951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.933979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.934381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.934411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.934696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.934731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.137 [2024-12-05 12:18:08.935102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.137 [2024-12-05 12:18:08.935131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.137 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.935390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.935419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.138 [2024-12-05 12:18:08.935705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.935735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.935999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.936032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.936397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.936427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.936790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.936820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.937034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.937063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.138 [2024-12-05 12:18:08.937441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.937482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.937860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.937889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.938263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.938291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.938663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.938694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.939055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.939084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.138 [2024-12-05 12:18:08.939447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.939486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.939712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.939741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.940198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.940226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.940554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.940585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.940969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.941000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.138 [2024-12-05 12:18:08.941363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.941392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.941794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.941824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.942183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.942211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.942475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.942505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.942737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.942768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.138 [2024-12-05 12:18:08.943133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.943162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.943530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.943560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.943923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.943951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.944324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.944353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 00:34:44.138 [2024-12-05 12:18:08.944699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.944728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.138 [2024-12-05 12:18:08.944980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.138 [2024-12-05 12:18:08.945011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.138 qpair failed and we were unable to recover it. 
00:34:44.141 [... identical connect()/qpair error repeated through 12:18:08.987309 — errno = 111 (ECONNREFUSED), tqpair=0x7f277c000b90, addr=10.0.0.2, port=4420; duplicate log entries elided ...]
00:34:44.141 [2024-12-05 12:18:08.987695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-12-05 12:18:08.987725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-12-05 12:18:08.988108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-12-05 12:18:08.988137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-12-05 12:18:08.988373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-12-05 12:18:08.988404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-12-05 12:18:08.988760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-12-05 12:18:08.988790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-12-05 12:18:08.989149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-12-05 12:18:08.989179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 
00:34:44.141 [2024-12-05 12:18:08.989534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-12-05 12:18:08.989565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-12-05 12:18:08.989937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-12-05 12:18:08.989966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.141 [2024-12-05 12:18:08.990229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.141 [2024-12-05 12:18:08.990258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.141 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.990619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.990651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.990845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.990874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 
00:34:44.142 [2024-12-05 12:18:08.991111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.991143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.991509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.991539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.991901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.991931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.992294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.992322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.992685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.992715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 
00:34:44.142 [2024-12-05 12:18:08.993079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.993107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.993351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.993383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.993756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.993787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.994145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.994174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.994544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.994575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 
00:34:44.142 [2024-12-05 12:18:08.994850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.994878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.995232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.995262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.995624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.995655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.996016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.996045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.996411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.996440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 
00:34:44.142 [2024-12-05 12:18:08.996814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.996843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.997202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.997231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.997572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.997602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.998036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.998065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.998397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.998426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 
00:34:44.142 [2024-12-05 12:18:08.998818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.998848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.999192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.999221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.999566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.999597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:08.999724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:08.999756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:09.000105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:09.000141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 
00:34:44.142 [2024-12-05 12:18:09.000501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:09.000532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:09.000899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:09.000927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:09.001293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:09.001323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:09.001691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:09.001721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:09.002087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:09.002116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 
00:34:44.142 [2024-12-05 12:18:09.002325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:09.002354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:09.002798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.142 [2024-12-05 12:18:09.002827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.142 qpair failed and we were unable to recover it. 00:34:44.142 [2024-12-05 12:18:09.003189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.003217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.003579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.003610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.003968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.003997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-12-05 12:18:09.004354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.004383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.004765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.004796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.005026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.005055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.005431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.005469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.005826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.005855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-12-05 12:18:09.006217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.006245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.006615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.006646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.007003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.007032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.007397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.007425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.007789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.007819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-12-05 12:18:09.008072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.008105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.008476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.008507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.008906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.008935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.009150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.009182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.009612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.009643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-12-05 12:18:09.009983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.010012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.010379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.010409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.010868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.010899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.011230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.011258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.011623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.011654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-12-05 12:18:09.012016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.012045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.012409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.012438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.012779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.012809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.013112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.013141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.013514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.013545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-12-05 12:18:09.013906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.013935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.014318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.014347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.014738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.014768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.015188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.015218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.143 [2024-12-05 12:18:09.015472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.015511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 
00:34:44.143 [2024-12-05 12:18:09.015912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.143 [2024-12-05 12:18:09.015941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.143 qpair failed and we were unable to recover it. 00:34:44.147 (previous three messages repeated for every reconnect attempt on tqpair=0x7f277c000b90, addr=10.0.0.2, port=4420, from [2024-12-05 12:18:09.016302] through [2024-12-05 12:18:09.058947])
00:34:44.147 [2024-12-05 12:18:09.059316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.059346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.059702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.059733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.060107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.060137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.060565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.060595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.061009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.061045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 
00:34:44.147 [2024-12-05 12:18:09.061384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.061413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.061776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.061806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.062149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.062177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.062543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.062575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.062942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.062972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 
00:34:44.147 [2024-12-05 12:18:09.063329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.063358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.063604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.063634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.063983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.064013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.064391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.064420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.064781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.064812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 
00:34:44.147 [2024-12-05 12:18:09.065224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.065253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.065479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.065510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.065865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.065894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.066258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.066287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.066680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.066711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 
00:34:44.147 [2024-12-05 12:18:09.067073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.067103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.067489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.067520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.067928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.067959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.068196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.068226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.068580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.068610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 
00:34:44.147 [2024-12-05 12:18:09.069041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.069070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.069400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.069430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.069795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.069826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.070194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.070224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.070625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.070656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 
00:34:44.147 [2024-12-05 12:18:09.070995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.071024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.071393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.071423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.147 [2024-12-05 12:18:09.071681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.147 [2024-12-05 12:18:09.071711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.147 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.072074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.072103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.072372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.072404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 
00:34:44.148 [2024-12-05 12:18:09.072777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.072807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.073009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.073037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.073413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.073442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.073808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.073837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.074225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.074254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 
00:34:44.148 [2024-12-05 12:18:09.074610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.074641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.075013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.075042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.075409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.075438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.075828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.075859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.076260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.076296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 
00:34:44.148 [2024-12-05 12:18:09.076644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.076675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.077024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.077053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.077416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.077445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.077811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.077842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.078201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.078230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 
00:34:44.148 [2024-12-05 12:18:09.078578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.078610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.078951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.078980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.079341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.079370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.079665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.079696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.080061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.080090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 
00:34:44.148 [2024-12-05 12:18:09.080464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.080495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.080890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.080920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.081289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.081318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.081692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.081723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.082085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.082115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 
00:34:44.148 [2024-12-05 12:18:09.082481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.082512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.082871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.082899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.083274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.083303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.083666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.083697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.084066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.084095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 
00:34:44.148 [2024-12-05 12:18:09.084464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.084494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.084841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.084870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.085239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.085268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.085626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.085657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 00:34:44.148 [2024-12-05 12:18:09.086001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.086031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.148 qpair failed and we were unable to recover it. 
00:34:44.148 [2024-12-05 12:18:09.086368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.148 [2024-12-05 12:18:09.086397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.149 qpair failed and we were unable to recover it. 00:34:44.149 [2024-12-05 12:18:09.086785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.149 [2024-12-05 12:18:09.086816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.149 qpair failed and we were unable to recover it. 00:34:44.149 [2024-12-05 12:18:09.087183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.149 [2024-12-05 12:18:09.087211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.149 qpair failed and we were unable to recover it. 00:34:44.149 [2024-12-05 12:18:09.087580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.149 [2024-12-05 12:18:09.087611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.149 qpair failed and we were unable to recover it. 00:34:44.149 [2024-12-05 12:18:09.087964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.149 [2024-12-05 12:18:09.087993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.149 qpair failed and we were unable to recover it. 
00:34:44.149 [2024-12-05 12:18:09.088352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.088381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.088742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.088772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.089107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.089136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.089502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.089533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.089900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.089929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.090290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.090319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.090685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.090715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.091085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.091114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.091478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.091509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.091949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.091983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.092324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.092353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.092703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.092734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.093090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.093119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.093358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.093390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.093648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.093678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.094048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.094077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.094483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.094515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.094856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.094885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.095242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.095272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.095639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.095671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.096091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.096119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.096452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.096491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.096864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.096893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.097238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.097267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.097612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.097643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.098000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.098030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.098305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.098333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.098691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.098721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.099082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.099110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.099484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.099515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.099864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.099893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.100257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.100286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.100682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.149 [2024-12-05 12:18:09.100712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.149 qpair failed and we were unable to recover it.
00:34:44.149 [2024-12-05 12:18:09.101075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.101104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.101473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.101505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.101858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.101887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.102271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.102301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.102558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.102591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.102939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.102968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.103345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.103374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.103628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.103659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.104010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.104039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.104381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.104410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.104773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.104805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.105242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.105271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.105627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.105657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.106017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.106045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.106409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.106438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.106823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.106853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.107215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.107251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.107612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.107644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.108015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.108044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.108411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.108440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.108829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.108859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.109108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.109137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.109475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.109505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.109873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.109902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.110266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.110295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.110721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.110752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.111109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.111139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.111505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.111537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.111871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.111900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.112267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.112297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.112692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.112724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.113063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.113092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.113356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.113385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.113759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.113789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.114036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.114065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.114422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.114451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.114691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.150 [2024-12-05 12:18:09.114721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.150 qpair failed and we were unable to recover it.
00:34:44.150 [2024-12-05 12:18:09.115068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.115097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.115387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.115415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.115659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.115693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.116049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.116078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.116440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.116482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.116838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.116867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.117229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.117259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.117632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.117663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.118032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.118061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.118474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.118504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.118856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.118884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.119314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.119343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.119711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.119741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.119999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.120029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.120293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.120323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.120696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.120726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.121077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.121107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.121475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.121506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.121881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.121910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.122271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.122306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.122642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.122672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.123083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.123112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.123482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.123513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.123855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.123884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.124230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.124259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.124599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.124629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.124878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.124910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.125266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.125295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.125583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.125613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.125998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.126027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.126367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.126396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.126756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.126786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.127151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.127181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.151 qpair failed and we were unable to recover it.
00:34:44.151 [2024-12-05 12:18:09.127430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.151 [2024-12-05 12:18:09.127467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-12-05 12:18:09.127813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-12-05 12:18:09.127842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-12-05 12:18:09.128203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-12-05 12:18:09.128231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-12-05 12:18:09.128595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-12-05 12:18:09.128626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-12-05 12:18:09.128975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-12-05 12:18:09.129004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-12-05 12:18:09.129214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-12-05 12:18:09.129243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-12-05 12:18:09.129606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-12-05 12:18:09.129637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-12-05 12:18:09.129894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-12-05 12:18:09.129923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-12-05 12:18:09.130293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-12-05 12:18:09.130322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-12-05 12:18:09.130687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-12-05 12:18:09.130718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-12-05 12:18:09.131085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-12-05 12:18:09.131114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-12-05 12:18:09.131362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.152 [2024-12-05 12:18:09.131393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.152 qpair failed and we were unable to recover it.
00:34:44.152 [2024-12-05 12:18:09.131831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.131862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.132209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.132239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.132587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.132618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.132908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.132938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.133247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.133278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 
00:34:44.152 [2024-12-05 12:18:09.133644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.133675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.134093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.134122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.134478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.134509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.134905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.134935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.135197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.135226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 
00:34:44.152 [2024-12-05 12:18:09.135584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.135614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.135835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.135867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.136119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.136149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.136499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.136529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.136904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.136939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 
00:34:44.152 [2024-12-05 12:18:09.137296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.137325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.137731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.137761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.138170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.138198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.138560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.138590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.138963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.138993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 
00:34:44.152 [2024-12-05 12:18:09.139363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.139393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.139750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.139781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.139991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.140023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.140385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.140416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.140795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.140826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 
00:34:44.152 [2024-12-05 12:18:09.141198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.152 [2024-12-05 12:18:09.141227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.152 qpair failed and we were unable to recover it. 00:34:44.152 [2024-12-05 12:18:09.141591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.141621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.141990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.142019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.142423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.142465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.142629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.142661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 
00:34:44.153 [2024-12-05 12:18:09.143035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.143064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.143434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.143485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.143839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.143868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.144232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.144261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.144608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.144639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 
00:34:44.153 [2024-12-05 12:18:09.144990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.145020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.145346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.145375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.145733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.145764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.146125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.146154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.146525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.146556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 
00:34:44.153 [2024-12-05 12:18:09.146936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.146966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.147332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.147362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.147706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.147736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.147991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.148019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.148369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.148400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 
00:34:44.153 [2024-12-05 12:18:09.148766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.148797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.149152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.149181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.149546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.149576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.149957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.149987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.150347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.150376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 
00:34:44.153 [2024-12-05 12:18:09.150722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.150752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.151114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.151143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.151516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.151547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.151956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.151985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.152355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.152390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 
00:34:44.153 [2024-12-05 12:18:09.152738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.152770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.153142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.153171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.153591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.153623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.153981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.154011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.154370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.154400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 
00:34:44.153 [2024-12-05 12:18:09.154769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.154799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.155203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.155232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.155574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.153 [2024-12-05 12:18:09.155604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.153 qpair failed and we were unable to recover it. 00:34:44.153 [2024-12-05 12:18:09.155968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-12-05 12:18:09.155998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-12-05 12:18:09.156363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-12-05 12:18:09.156392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 
00:34:44.154 [2024-12-05 12:18:09.156823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-12-05 12:18:09.156854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-12-05 12:18:09.157185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-12-05 12:18:09.157214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-12-05 12:18:09.157580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-12-05 12:18:09.157613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-12-05 12:18:09.158048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-12-05 12:18:09.158078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-12-05 12:18:09.158417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-12-05 12:18:09.158446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 
00:34:44.154 [2024-12-05 12:18:09.158794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-12-05 12:18:09.158824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-12-05 12:18:09.159191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-12-05 12:18:09.159220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-12-05 12:18:09.159487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-12-05 12:18:09.159518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-12-05 12:18:09.159874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-12-05 12:18:09.159904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 00:34:44.154 [2024-12-05 12:18:09.160267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.154 [2024-12-05 12:18:09.160295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.154 qpair failed and we were unable to recover it. 
00:34:44.154 [2024-12-05 12:18:09.160643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.154 [2024-12-05 12:18:09.160673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.154 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111, ECONNREFUSED) / sock connection error / "qpair failed and we were unable to recover it" records for tqpair=0x7f277c000b90 (addr=10.0.0.2, port=4420) repeat continuously from 12:18:09.160643 through 12:18:09.204737; repeated records elided]
00:34:44.428 [2024-12-05 12:18:09.205099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.428 [2024-12-05 12:18:09.205128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.428 qpair failed and we were unable to recover it. 00:34:44.428 [2024-12-05 12:18:09.205491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.428 [2024-12-05 12:18:09.205522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.428 qpair failed and we were unable to recover it. 00:34:44.428 [2024-12-05 12:18:09.205871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.428 [2024-12-05 12:18:09.205902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.428 qpair failed and we were unable to recover it. 00:34:44.428 [2024-12-05 12:18:09.206226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.428 [2024-12-05 12:18:09.206255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.428 qpair failed and we were unable to recover it. 00:34:44.428 [2024-12-05 12:18:09.206609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.428 [2024-12-05 12:18:09.206641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.428 qpair failed and we were unable to recover it. 
00:34:44.428 [2024-12-05 12:18:09.206878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.428 [2024-12-05 12:18:09.206908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.428 qpair failed and we were unable to recover it. 00:34:44.428 [2024-12-05 12:18:09.207016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.428 [2024-12-05 12:18:09.207045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.428 qpair failed and we were unable to recover it. 00:34:44.428 [2024-12-05 12:18:09.207450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.428 [2024-12-05 12:18:09.207494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.207833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.207863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.208228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.208260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 
00:34:44.429 [2024-12-05 12:18:09.208614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.208646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.209011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.209063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.209433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.209472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.209810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.209840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.210204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.210234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 
00:34:44.429 [2024-12-05 12:18:09.210577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.210608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.210976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.211006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.211370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.211400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.211762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.211793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.212195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.212224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 
00:34:44.429 [2024-12-05 12:18:09.212491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.212523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.212904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.212935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.213301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.213331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.213587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.213618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.213965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.213995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 
00:34:44.429 [2024-12-05 12:18:09.214358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.214389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.214727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.214758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.215012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.215043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.215296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.215326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.215700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.215731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 
00:34:44.429 [2024-12-05 12:18:09.216088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.216118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.216366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.216396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.216839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.216869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.217262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.217291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.217707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.217738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 
00:34:44.429 [2024-12-05 12:18:09.218079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.218108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.218341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.429 [2024-12-05 12:18:09.218370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.429 qpair failed and we were unable to recover it. 00:34:44.429 [2024-12-05 12:18:09.218647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.218679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.219045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.219074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.219436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.219475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 
00:34:44.430 [2024-12-05 12:18:09.219829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.219859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.220235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.220264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.220515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.220547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.220976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.221008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.221254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.221284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 
00:34:44.430 [2024-12-05 12:18:09.221641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.221673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.222043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.222072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.222324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.222353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.222699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.222729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.223096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.223124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 
00:34:44.430 [2024-12-05 12:18:09.223490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.223520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.223790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.223829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.224075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.224105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.224468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.224500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.224902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.224932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 
00:34:44.430 [2024-12-05 12:18:09.225306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.225335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.225700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.225730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.225977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.226009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.226347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.226376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.226617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.226647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 
00:34:44.430 [2024-12-05 12:18:09.227023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.227054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.227420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.227449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.227866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.227896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.228263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.228292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.430 [2024-12-05 12:18:09.228543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.228574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 
00:34:44.430 [2024-12-05 12:18:09.229011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.430 [2024-12-05 12:18:09.229041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.430 qpair failed and we were unable to recover it. 00:34:44.431 [2024-12-05 12:18:09.229407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.431 [2024-12-05 12:18:09.229436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.431 qpair failed and we were unable to recover it. 00:34:44.431 [2024-12-05 12:18:09.229813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.431 [2024-12-05 12:18:09.229844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.431 qpair failed and we were unable to recover it. 00:34:44.431 [2024-12-05 12:18:09.230093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.431 [2024-12-05 12:18:09.230122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.431 qpair failed and we were unable to recover it. 00:34:44.431 [2024-12-05 12:18:09.230492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.431 [2024-12-05 12:18:09.230524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.431 qpair failed and we were unable to recover it. 
00:34:44.431 [2024-12-05 12:18:09.230871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.431 [2024-12-05 12:18:09.230899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.431 qpair failed and we were unable to recover it. 00:34:44.431 [2024-12-05 12:18:09.231263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.431 [2024-12-05 12:18:09.231292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.431 qpair failed and we were unable to recover it. 00:34:44.431 [2024-12-05 12:18:09.231536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.431 [2024-12-05 12:18:09.231566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.431 qpair failed and we were unable to recover it. 00:34:44.431 [2024-12-05 12:18:09.231931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.431 [2024-12-05 12:18:09.231961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.431 qpair failed and we were unable to recover it. 00:34:44.431 [2024-12-05 12:18:09.232319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.431 [2024-12-05 12:18:09.232348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.431 qpair failed and we were unable to recover it. 
00:34:44.431 [2024-12-05 12:18:09.232694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.431 [2024-12-05 12:18:09.232724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.431 qpair failed and we were unable to recover it. 00:34:44.431 [2024-12-05 12:18:09.233094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.431 [2024-12-05 12:18:09.233124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.431 qpair failed and we were unable to recover it. 00:34:44.431 [2024-12-05 12:18:09.233485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.431 [2024-12-05 12:18:09.233515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.431 qpair failed and we were unable to recover it. 00:34:44.431 [2024-12-05 12:18:09.233868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.431 [2024-12-05 12:18:09.233898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.431 qpair failed and we were unable to recover it. 00:34:44.431 [2024-12-05 12:18:09.234240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.431 [2024-12-05 12:18:09.234269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.431 qpair failed and we were unable to recover it. 
00:34:44.435 [2024-12-05 12:18:09.275579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.435 [2024-12-05 12:18:09.275609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.435 qpair failed and we were unable to recover it. 00:34:44.435 [2024-12-05 12:18:09.275972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.435 [2024-12-05 12:18:09.276001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.435 qpair failed and we were unable to recover it. 00:34:44.435 [2024-12-05 12:18:09.276365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.435 [2024-12-05 12:18:09.276393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.435 qpair failed and we were unable to recover it. 00:34:44.435 [2024-12-05 12:18:09.276808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.435 [2024-12-05 12:18:09.276839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.435 qpair failed and we were unable to recover it. 00:34:44.435 [2024-12-05 12:18:09.277243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.435 [2024-12-05 12:18:09.277271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.435 qpair failed and we were unable to recover it. 
00:34:44.435 [2024-12-05 12:18:09.277596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.435 [2024-12-05 12:18:09.277626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.435 qpair failed and we were unable to recover it. 00:34:44.435 [2024-12-05 12:18:09.277999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.435 [2024-12-05 12:18:09.278029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.435 qpair failed and we were unable to recover it. 00:34:44.435 [2024-12-05 12:18:09.278389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.435 [2024-12-05 12:18:09.278418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.435 qpair failed and we were unable to recover it. 00:34:44.435 [2024-12-05 12:18:09.278861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.435 [2024-12-05 12:18:09.278894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.435 qpair failed and we were unable to recover it. 00:34:44.435 [2024-12-05 12:18:09.279228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.435 [2024-12-05 12:18:09.279257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.435 qpair failed and we were unable to recover it. 
00:34:44.435 [2024-12-05 12:18:09.279639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.435 [2024-12-05 12:18:09.279671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.435 qpair failed and we were unable to recover it. 00:34:44.435 [2024-12-05 12:18:09.280101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.435 [2024-12-05 12:18:09.280129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.435 qpair failed and we were unable to recover it. 00:34:44.435 [2024-12-05 12:18:09.280508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.435 [2024-12-05 12:18:09.280538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.435 qpair failed and we were unable to recover it. 00:34:44.435 [2024-12-05 12:18:09.280879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.435 [2024-12-05 12:18:09.280908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.435 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.281200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.281229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 
00:34:44.436 [2024-12-05 12:18:09.281590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.281620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.281978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.282007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.282306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.282335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.282693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.282724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.283084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.283114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 
00:34:44.436 [2024-12-05 12:18:09.283480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.283511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.283868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.283909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.284247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.284277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.284660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.284692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.285091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.285120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 
00:34:44.436 [2024-12-05 12:18:09.285499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.285530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.285786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.285817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.286144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.286174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.286526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.286558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.286935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.286963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 
00:34:44.436 [2024-12-05 12:18:09.287326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.287354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.287740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.287771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.288185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.288214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.288591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.288620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.288961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.288990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 
00:34:44.436 [2024-12-05 12:18:09.289350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.289379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.289636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.289666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.290015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.290044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.290276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.290308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.290639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.290670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 
00:34:44.436 [2024-12-05 12:18:09.291036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.291066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.291431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.291494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.291826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.291856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.436 qpair failed and we were unable to recover it. 00:34:44.436 [2024-12-05 12:18:09.292254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.436 [2024-12-05 12:18:09.292287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.292640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.292671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 
00:34:44.437 [2024-12-05 12:18:09.293022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.293051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.293400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.293429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.293788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.293817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.294050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.294084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.294446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.294487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 
00:34:44.437 [2024-12-05 12:18:09.294827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.294855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.295218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.295248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.295617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.295648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.296009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.296039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.296478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.296510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 
00:34:44.437 [2024-12-05 12:18:09.296864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.296893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.297260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.297289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.297734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.297765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.297961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.297994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.298386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.298415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 
00:34:44.437 [2024-12-05 12:18:09.298778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.298810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.299043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.299079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.299443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.299485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.299887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.299916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.300259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.300289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 
00:34:44.437 [2024-12-05 12:18:09.300654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.300685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.301049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.301078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.301434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.437 [2024-12-05 12:18:09.301477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.437 qpair failed and we were unable to recover it. 00:34:44.437 [2024-12-05 12:18:09.301813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.301843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.302217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.302246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 
00:34:44.438 [2024-12-05 12:18:09.302588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.302618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.302871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.302902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.303256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.303284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.303625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.303656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.304024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.304055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 
00:34:44.438 [2024-12-05 12:18:09.304428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.304479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.304844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.304874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.305232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.305261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.305638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.305669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.306032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.306062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 
00:34:44.438 [2024-12-05 12:18:09.306299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.306332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.306689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.306720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.307139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.307167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.307527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.307557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.307940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.307970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 
00:34:44.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1553513 Killed "${NVMF_APP[@]}" "$@" 00:34:44.438 [2024-12-05 12:18:09.308350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.308380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.308719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.308750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 12:18:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:34:44.438 [2024-12-05 12:18:09.309126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.309157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 12:18:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:44.438 [2024-12-05 12:18:09.309537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.309568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 
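(Editor's note, not part of the captured output: `errno = 111` in the repeated `posix_sock_create` errors is `ECONNREFUSED` on Linux, which is expected at this point in the log — the `nvmf_tgt` process was just killed, so nothing is listening on 10.0.0.2:4420 and the host's reconnect attempts are refused until `disconnect_init` restarts the target. A minimal, self-contained sketch of that failure mode, using a localhost port with no listener rather than the test's own addresses:)

```python
import errno
import socket

# On Linux, errno 111 is ECONNREFUSED: the TCP SYN is answered with RST
# because no process is listening on the target port -- exactly what the
# "connect() failed, errno = 111" lines above report after nvmf_tgt dies.
assert errno.ECONNREFUSED == 111


def try_connect(addr: str, port: int, timeout: float = 0.5) -> int:
    """Attempt one TCP connect; return 0 on success, else the errno."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((addr, port))


# Connecting to a closed local port reproduces the same errno the log shows;
# port 1 is used here only because it is almost never bound on a test host.
rc = try_connect("127.0.0.1", 1)
```

(The function name and port choice are illustrative, not taken from the SPDK test scripts; the actual retry loop lives in SPDK's `nvme_tcp_qpair_connect_sock` path shown in the log.)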
00:34:44.438 [2024-12-05 12:18:09.309759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.309788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 12:18:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 12:18:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:44.438 [2024-12-05 12:18:09.310181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.310213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 12:18:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.438 [2024-12-05 12:18:09.310586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.310618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.310859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.310893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 
00:34:44.438 [2024-12-05 12:18:09.311140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.311170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.438 [2024-12-05 12:18:09.311413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.438 [2024-12-05 12:18:09.311442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.438 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.311802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.311833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.312170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.312199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.312573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.312603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 
00:34:44.439 [2024-12-05 12:18:09.313016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.313046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.313305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.313335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.313684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.313718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.313969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.313998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.314360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.314390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 
00:34:44.439 [2024-12-05 12:18:09.314800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.314833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.315040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.315075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.315349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.315380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.315733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.315764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.316141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.316172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 
00:34:44.439 [2024-12-05 12:18:09.316501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.316532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.316877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.316909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.317281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.317311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.317700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.317732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 [2024-12-05 12:18:09.318116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.318148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 
00:34:44.439 12:18:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=1554624 00:34:44.439 [2024-12-05 12:18:09.318549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.318592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 12:18:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 1554624 00:34:44.439 [2024-12-05 12:18:09.318972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.319005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 12:18:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:44.439 12:18:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1554624 ']' 00:34:44.439 [2024-12-05 12:18:09.319364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.319396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 
00:34:44.439 12:18:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.439 [2024-12-05 12:18:09.319656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.319689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 12:18:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:44.439 [2024-12-05 12:18:09.320037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.320069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 12:18:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.439 qpair failed and we were unable to recover it. 00:34:44.439 12:18:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:44.439 [2024-12-05 12:18:09.320426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.439 [2024-12-05 12:18:09.320477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.439 qpair failed and we were unable to recover it. 
00:34:44.439 12:18:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:44.440 [2024-12-05 12:18:09.321555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.321609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.321990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.322025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.322396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.322428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.322812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.322847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.323208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.323239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 
00:34:44.440 [2024-12-05 12:18:09.323600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.323633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.324020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.324051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.324414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.324446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.324872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.324905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.325295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.325326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 
00:34:44.440 [2024-12-05 12:18:09.325690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.325723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.325973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.326004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.326370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.326400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.326690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.326722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.326986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.327017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 
00:34:44.440 [2024-12-05 12:18:09.327384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.327417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.327813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.327846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.328281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.328312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.328650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.328682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.329045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.329076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 
00:34:44.440 [2024-12-05 12:18:09.329448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.329490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.329760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.329791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.330168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.330199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.330496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.330527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.330782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.330813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 
00:34:44.440 [2024-12-05 12:18:09.331176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.331207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.331578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.331609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.331963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.331993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.440 [2024-12-05 12:18:09.332342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.440 [2024-12-05 12:18:09.332378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.440 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.332770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.332802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 
00:34:44.441 [2024-12-05 12:18:09.333181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.333211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.333582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.333616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.334011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.334042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.334410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.334440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.334888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.334919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 
00:34:44.441 [2024-12-05 12:18:09.335278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.335308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.335579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.335610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.335951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.335982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.336323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.336352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.336600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.336634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 
00:34:44.441 [2024-12-05 12:18:09.337030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.337060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.337427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.337465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.337961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.337991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.338357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.338388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.338656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.338691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 
00:34:44.441 [2024-12-05 12:18:09.339087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.339118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.339426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.339479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.339845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.339876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.340234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.340265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 00:34:44.441 [2024-12-05 12:18:09.340543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.441 [2024-12-05 12:18:09.340575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.441 qpair failed and we were unable to recover it. 
00:34:44.445 [2024-12-05 12:18:09.374160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.445 [2024-12-05 12:18:09.374192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.445 qpair failed and we were unable to recover it. 00:34:44.445 [2024-12-05 12:18:09.374422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.445 [2024-12-05 12:18:09.374467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.445 qpair failed and we were unable to recover it. 00:34:44.445 [2024-12-05 12:18:09.374853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.445 [2024-12-05 12:18:09.374883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.445 qpair failed and we were unable to recover it. 00:34:44.445 [2024-12-05 12:18:09.375096] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:34:44.445 [2024-12-05 12:18:09.375167] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:44.445 [2024-12-05 12:18:09.375327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.445 [2024-12-05 12:18:09.375359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.445 qpair failed and we were unable to recover it. 
00:34:44.446 [2024-12-05 12:18:09.383550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.383582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.383935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.383966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.384365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.384396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.384781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.384812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.385196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.385226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 
00:34:44.446 [2024-12-05 12:18:09.385538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.385571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.385942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.385973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.386407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.386438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.386693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.386727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.386989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.387020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 
00:34:44.446 [2024-12-05 12:18:09.387392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.387423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.387845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.387876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.388122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.388153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.388540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.388573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.388805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.388835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 
00:34:44.446 [2024-12-05 12:18:09.389201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.389232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.389554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.389586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.389981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.390011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.390443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.390487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.390776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.390806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 
00:34:44.446 [2024-12-05 12:18:09.391057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.391090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.391535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.391569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.391898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.446 [2024-12-05 12:18:09.391928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.446 qpair failed and we were unable to recover it. 00:34:44.446 [2024-12-05 12:18:09.392297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.392329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.392702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.392734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 
00:34:44.447 [2024-12-05 12:18:09.393022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.393053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.393254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.393285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.393654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.393687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.394053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.394089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.394330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.394360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 
00:34:44.447 [2024-12-05 12:18:09.394681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.394714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.395066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.395096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.395481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.395512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.395895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.395926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.396289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.396320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 
00:34:44.447 [2024-12-05 12:18:09.396608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.396641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.396918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.396948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.397310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.397341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.397693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.397725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.398065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.398096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 
00:34:44.447 [2024-12-05 12:18:09.398355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.398386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.398744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.398775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.399114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.399145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.399513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.399544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.399938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.399968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 
00:34:44.447 [2024-12-05 12:18:09.400424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.400464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.400838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.400870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.401236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.401267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.401662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.401694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.402055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.402085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 
00:34:44.447 [2024-12-05 12:18:09.402448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.402490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.447 [2024-12-05 12:18:09.402899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.447 [2024-12-05 12:18:09.402929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.447 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.403300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.403331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.403693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.403723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.404070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.404100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 
00:34:44.448 [2024-12-05 12:18:09.404453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.404494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.404848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.404877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.405253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.405282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.405732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.405764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.406135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.406165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 
00:34:44.448 [2024-12-05 12:18:09.406539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.406571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.406959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.406990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.407349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.407380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.407626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.407657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.408020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.408050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 
00:34:44.448 [2024-12-05 12:18:09.408424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.408465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.408819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.408849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.409087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.409118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.409493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.409531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.409878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.409908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 
00:34:44.448 [2024-12-05 12:18:09.410265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.410295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.410638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.410670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.411039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.411069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.411320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.411349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 00:34:44.448 [2024-12-05 12:18:09.411748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.448 [2024-12-05 12:18:09.411778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.448 qpair failed and we were unable to recover it. 
00:34:44.448 [2024-12-05 12:18:09.412138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.448 [2024-12-05 12:18:09.412167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.448 qpair failed and we were unable to recover it.
00:34:44.448 [... the same three-line sequence (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 12:18:09.412 through 12:18:09.454 ...]
00:34:44.452 [2024-12-05 12:18:09.454827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.452 [2024-12-05 12:18:09.454861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.452 qpair failed and we were unable to recover it. 00:34:44.452 [2024-12-05 12:18:09.455222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.452 [2024-12-05 12:18:09.455251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.452 qpair failed and we were unable to recover it. 00:34:44.452 [2024-12-05 12:18:09.455607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.455637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.456030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.456060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.456399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.456430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 
00:34:44.453 [2024-12-05 12:18:09.456706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.456737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.457113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.457143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.457511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.457542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.457909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.457939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.458317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.458347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 
00:34:44.453 [2024-12-05 12:18:09.458598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.458630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.458885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.458915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.459304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.459334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.459678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.459709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.460077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.460107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 
00:34:44.453 [2024-12-05 12:18:09.460476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.460508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.460865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.460896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.461260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.461290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.461646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.461679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.461927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.461957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 
00:34:44.453 [2024-12-05 12:18:09.462351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.462380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.462721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.462752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.463010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.463040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.463287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.463317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.463584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.463614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 
00:34:44.453 [2024-12-05 12:18:09.463966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.463997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.464337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.464367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.464623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.464656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.465010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.453 [2024-12-05 12:18:09.465040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.453 qpair failed and we were unable to recover it. 00:34:44.453 [2024-12-05 12:18:09.465410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.465441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 
00:34:44.724 [2024-12-05 12:18:09.465684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.465717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.466082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.466113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.466483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.466515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.466879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.466909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.467340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.467369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 
00:34:44.724 [2024-12-05 12:18:09.467727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.467758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.468120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.468151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.468531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.468562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.468931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.468967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.469193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.469225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 
00:34:44.724 [2024-12-05 12:18:09.469625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.469657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.470076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.470105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.470478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.470509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.470864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.470894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.471258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.471287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 
00:34:44.724 [2024-12-05 12:18:09.471686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.471718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.472154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.472184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.472549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.472581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.472842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.472874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.473229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.473260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 
00:34:44.724 [2024-12-05 12:18:09.473608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.473639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.474002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.474031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.474404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.474434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.474786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.474817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.475191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.475222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 
00:34:44.724 [2024-12-05 12:18:09.475478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.475509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.475852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.475882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.476269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.476299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.476541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.476573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.476994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.477024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 
00:34:44.724 [2024-12-05 12:18:09.477389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.477420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.477695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.477728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.478063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.478094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.478341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.478372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.478747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.478779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 
00:34:44.724 [2024-12-05 12:18:09.479143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.479173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.479543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.479574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.479950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.479982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.724 [2024-12-05 12:18:09.480339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.724 [2024-12-05 12:18:09.480370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.724 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.480768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.480802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 
00:34:44.725 [2024-12-05 12:18:09.482400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:44.725 [2024-12-05 12:18:09.490467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.490498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.490884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.490913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.491276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.491305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.491651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.491682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.491930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.491960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 
00:34:44.725 [2024-12-05 12:18:09.492329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.492359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.492727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.492757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.493120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.493149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.493390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.493423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.493849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.493881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 
00:34:44.725 [2024-12-05 12:18:09.494248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.494279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.494637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.494668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.495040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.495069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.495222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.495250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.495627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.495658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 
00:34:44.725 [2024-12-05 12:18:09.496051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.496079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.496452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.496498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.725 [2024-12-05 12:18:09.496733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.725 [2024-12-05 12:18:09.496766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.725 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.497106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.497135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.497496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.497527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 
00:34:44.726 [2024-12-05 12:18:09.497927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.497958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.498323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.498353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.498762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.498792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.499165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.499208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.499624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.499655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 
00:34:44.726 [2024-12-05 12:18:09.500032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.500062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.500419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.500450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.500732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.500763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.501132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.501161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.501521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.501553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 
00:34:44.726 [2024-12-05 12:18:09.501913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.501943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.502300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.502328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.502679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.502710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.503079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.503108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.503354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.503385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 
00:34:44.726 [2024-12-05 12:18:09.503755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.503786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.504154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.504184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.504576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.504607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.504964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.504993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.505363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.505392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 
00:34:44.726 [2024-12-05 12:18:09.505738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.505769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.506149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.506180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.506555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.506587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.507017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.507050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.507389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.507419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 
00:34:44.726 [2024-12-05 12:18:09.507700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.507733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.508083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.508113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.508471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.508502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.508862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.508892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.509234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.509264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 
00:34:44.726 [2024-12-05 12:18:09.509602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.509633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.510077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.510107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.510490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.510521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.510900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.510929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.511194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.511223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 
00:34:44.726 [2024-12-05 12:18:09.511450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.511491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.511797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.511827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.512183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.512214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.512582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.726 [2024-12-05 12:18:09.512613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.726 qpair failed and we were unable to recover it. 00:34:44.726 [2024-12-05 12:18:09.512989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.513021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 
00:34:44.727 [2024-12-05 12:18:09.513380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.513411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 00:34:44.727 [2024-12-05 12:18:09.513826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.513856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 00:34:44.727 [2024-12-05 12:18:09.514223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.514253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 00:34:44.727 [2024-12-05 12:18:09.514498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.514536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 00:34:44.727 [2024-12-05 12:18:09.514921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.514951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 
00:34:44.727 [2024-12-05 12:18:09.515241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.515270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 00:34:44.727 [2024-12-05 12:18:09.515631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.515663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 00:34:44.727 [2024-12-05 12:18:09.516030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.516060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 00:34:44.727 [2024-12-05 12:18:09.516427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.516467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 00:34:44.727 [2024-12-05 12:18:09.516812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.516841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 
00:34:44.727 [2024-12-05 12:18:09.517249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.517280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 00:34:44.727 [2024-12-05 12:18:09.517614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.517644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 00:34:44.727 [2024-12-05 12:18:09.518020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.518051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 00:34:44.727 [2024-12-05 12:18:09.518404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.518434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 00:34:44.727 [2024-12-05 12:18:09.518812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.727 [2024-12-05 12:18:09.518842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.727 qpair failed and we were unable to recover it. 
00:34:44.727 [2024-12-05 12:18:09.519103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.727 [2024-12-05 12:18:09.519132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.727 qpair failed and we were unable to recover it.
[the preceding three-line failure sequence repeats with identical content through 12:18:09.533352; only the timestamps differ]
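For reference on the repeated failures above: errno = 111 is ECONNREFUSED on Linux, i.e. the connect() to 10.0.0.2:4420 (the conventional NVMe/TCP port) reached the host but nothing was accepting on that port yet. A minimal check of the mapping, assuming a Linux host (errno numbers are platform-specific):

```python
import errno
import os

# errno 111 is ECONNREFUSED on Linux: connect() was actively refused because
# no process was listening on the target port (10.0.0.2:4420 in this log).
print(errno.ECONNREFUSED)                  # 111 on Linux
print(os.strerror(errno.ECONNREFUSED))     # "Connection refused"
```

This is consistent with the target application still starting up (the app_setup_trace and reactor startup notices below appear only after these retries), so the refusals come from the listener not yet being bound.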
00:34:44.728 [2024-12-05 12:18:09.534574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:44.728 [2024-12-05 12:18:09.534624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:44.728 [2024-12-05 12:18:09.534633] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:44.728 [2024-12-05 12:18:09.534640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:44.728 [2024-12-05 12:18:09.534647] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:44.728 [2024-12-05 12:18:09.536937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:34:44.728 [2024-12-05 12:18:09.537153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:34:44.728 [2024-12-05 12:18:09.537312] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:34:44.728 [2024-12-05 12:18:09.537313] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
[the connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock / qpair failed sequence continues repeating with identical content through 12:18:09.559596; only the timestamps differ]
00:34:44.730 [2024-12-05 12:18:09.559966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.559997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.560340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.560370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.560712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.560742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.561121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.561150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.561414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.561444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 
00:34:44.730 [2024-12-05 12:18:09.561881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.561911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.562275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.562312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.562564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.562597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.562948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.562978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.563343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.563374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 
00:34:44.730 [2024-12-05 12:18:09.563643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.563674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.564007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.564038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.564416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.564447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.564711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.564741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.565094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.565125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 
00:34:44.730 [2024-12-05 12:18:09.565483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.565515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.565953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.565982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.566341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.566370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.566763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.566795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.567240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.567271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 
00:34:44.730 [2024-12-05 12:18:09.567640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.567672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.568048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.568077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.568447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.568488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.568885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.568915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.569170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.569200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 
00:34:44.730 [2024-12-05 12:18:09.569561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.569592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.569978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.570009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.570351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.570385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.730 qpair failed and we were unable to recover it. 00:34:44.730 [2024-12-05 12:18:09.570752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.730 [2024-12-05 12:18:09.570783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.571038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.571072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 
00:34:44.731 [2024-12-05 12:18:09.571300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.571331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.571700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.571731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.572101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.572131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.572577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.572609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.572842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.572871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 
00:34:44.731 [2024-12-05 12:18:09.573237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.573267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.573496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.573526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.573908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.573939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.574365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.574396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.574771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.574802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 
00:34:44.731 [2024-12-05 12:18:09.575064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.575095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.575482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.575514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.575753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.575784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.576135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.576168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.576583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.576615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 
00:34:44.731 [2024-12-05 12:18:09.576969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.576999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.577381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.577421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.577821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.577854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.578222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.578253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.578693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.578725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 
00:34:44.731 [2024-12-05 12:18:09.579070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.579100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.579320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.579349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.579719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.579751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.580122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.580152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.580523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.580554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 
00:34:44.731 [2024-12-05 12:18:09.580938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.580967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.581213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.581245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.581477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.581508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.581903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.581934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.582336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.582366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 
00:34:44.731 [2024-12-05 12:18:09.582642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.582672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.583039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.583069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.583333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.583366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.583802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.583833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.584195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.584225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 
00:34:44.731 [2024-12-05 12:18:09.584577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.584607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.584979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.585008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.585381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.585409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.585833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.585864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 00:34:44.731 [2024-12-05 12:18:09.586194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.731 [2024-12-05 12:18:09.586225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.731 qpair failed and we were unable to recover it. 
00:34:44.731 [2024-12-05 12:18:09.586610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.586643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 00:34:44.732 [2024-12-05 12:18:09.586900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.586930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 00:34:44.732 [2024-12-05 12:18:09.587289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.587318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 00:34:44.732 [2024-12-05 12:18:09.587679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.587710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 00:34:44.732 [2024-12-05 12:18:09.588017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.588047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 
00:34:44.732 [2024-12-05 12:18:09.588270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.588301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 00:34:44.732 [2024-12-05 12:18:09.588548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.588584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 00:34:44.732 [2024-12-05 12:18:09.588888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.588918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 00:34:44.732 [2024-12-05 12:18:09.589291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.589322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 00:34:44.732 [2024-12-05 12:18:09.589687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.589718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 
00:34:44.732 [2024-12-05 12:18:09.590066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.590096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 00:34:44.732 [2024-12-05 12:18:09.590437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.590477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 00:34:44.732 [2024-12-05 12:18:09.590827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.590858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 00:34:44.732 [2024-12-05 12:18:09.591223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.591253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 00:34:44.732 [2024-12-05 12:18:09.591476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.732 [2024-12-05 12:18:09.591507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.732 qpair failed and we were unable to recover it. 
00:34:44.732 [2024-12-05 12:18:09.591880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.591911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.592132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.592171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.592534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.592565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.592673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.592703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.592988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.593017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.593371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.593401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.593686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.593718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.593945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.593975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.594239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.594270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.594631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.594662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.595029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.595058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.595318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.595349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.595599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.595629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.596013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.596044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.596289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.596318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.596698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.596729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.597103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.597132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.597520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.597551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.597764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.597794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.598169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.598198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.598590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.598621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.598858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.598887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.599171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.599201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.732 qpair failed and we were unable to recover it.
00:34:44.732 [2024-12-05 12:18:09.599554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.732 [2024-12-05 12:18:09.599585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.599946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.599976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.600346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.600376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.600765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.600796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.601172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.601200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.601532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.601564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.602010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.602041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.602409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.602438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.602814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.602845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.603204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.603234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.603465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.603499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.603767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.603799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.604164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.604194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.604560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.604591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.604975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.605004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.605268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.605298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.605639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.605671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.605942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.605971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.606316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.606345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.606633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.606664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.607052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.607081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.607480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.607511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.607740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.607769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.608219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.608250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.608600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.608631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.608860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.608893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.609088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.609117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.609366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.609396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.609813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.609844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.610089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.610121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.610346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.610377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.610755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.610785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.611156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.611185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.611421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.611451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.611713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.611743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.612130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.612159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.612520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.612551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.612796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.612824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.613178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.613207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.613427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.613467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.613820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.613850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.614209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.614238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.614498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.614532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.733 qpair failed and we were unable to recover it.
00:34:44.733 [2024-12-05 12:18:09.614761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.733 [2024-12-05 12:18:09.614790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.615159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.615189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.615412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.615449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.615814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.615847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.616217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.616249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.616496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.616531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.616905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.616934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.617301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.617331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.617690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.617721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.618090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.618119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.618484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.618514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.618758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.618788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.619142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.619170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.619369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.619398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.619843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.619873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.620217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.620247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.620601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.620632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.621005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.621035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.621281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.621313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.621554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.621584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.621927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.621958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.622085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.622114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.622489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.622521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.622907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.622937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.623343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.623372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.623524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.623555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.623930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.623960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.624327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.624357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.624696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.734 [2024-12-05 12:18:09.624727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.734 qpair failed and we were unable to recover it.
00:34:44.734 [2024-12-05 12:18:09.625099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.625130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 00:34:44.734 [2024-12-05 12:18:09.625513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.625543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 00:34:44.734 [2024-12-05 12:18:09.625917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.625947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 00:34:44.734 [2024-12-05 12:18:09.626322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.626352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 00:34:44.734 [2024-12-05 12:18:09.626583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.626616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 
00:34:44.734 [2024-12-05 12:18:09.627006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.627036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 00:34:44.734 [2024-12-05 12:18:09.627413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.627444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 00:34:44.734 [2024-12-05 12:18:09.627824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.627854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 00:34:44.734 [2024-12-05 12:18:09.628193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.628222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 00:34:44.734 [2024-12-05 12:18:09.628589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.628622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 
00:34:44.734 [2024-12-05 12:18:09.628990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.629019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 00:34:44.734 [2024-12-05 12:18:09.629398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.629430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 00:34:44.734 [2024-12-05 12:18:09.629656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.629688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 00:34:44.734 [2024-12-05 12:18:09.629936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.629981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 00:34:44.734 [2024-12-05 12:18:09.630343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.630372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.734 qpair failed and we were unable to recover it. 
00:34:44.734 [2024-12-05 12:18:09.630711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.734 [2024-12-05 12:18:09.630742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.630984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.631013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.631390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.631418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.631790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.631820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.632041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.632071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 
00:34:44.735 [2024-12-05 12:18:09.632407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.632436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.632804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.632835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.633081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.633111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.633445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.633488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.633887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.633917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 
00:34:44.735 [2024-12-05 12:18:09.634284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.634314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.634698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.634730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.635169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.635199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.635543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.635574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.635921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.635950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 
00:34:44.735 [2024-12-05 12:18:09.636325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.636354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.636712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.636742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.637115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.637145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.637359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.637389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.637766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.637797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 
00:34:44.735 [2024-12-05 12:18:09.638021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.638050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.638275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.638304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.638551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.638582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.638797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.638826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.639086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.639114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 
00:34:44.735 [2024-12-05 12:18:09.639479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.639510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.639759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.639791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.640019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.640048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.640374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.640404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.640744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.640774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 
00:34:44.735 [2024-12-05 12:18:09.640997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.641026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.641387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.641417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.641745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.641776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.642147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.642176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.642541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.642572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 
00:34:44.735 [2024-12-05 12:18:09.642947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.642975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.643342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.643370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.643610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.643641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.643997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.644038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.644413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.644443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 
00:34:44.735 [2024-12-05 12:18:09.644768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.644798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.735 qpair failed and we were unable to recover it. 00:34:44.735 [2024-12-05 12:18:09.645247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.735 [2024-12-05 12:18:09.645275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.645564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.645594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.645972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.646001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.646259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.646287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 
00:34:44.736 [2024-12-05 12:18:09.646644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.646675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.646939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.646968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.647330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.647359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.647717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.647748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.648097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.648126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 
00:34:44.736 [2024-12-05 12:18:09.648488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.648518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.648927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.648956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.649326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.649355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.649722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.649752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.650151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.650180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 
00:34:44.736 [2024-12-05 12:18:09.650551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.650583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.650825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.650854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.651098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.651129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.651499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.651530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.651903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.651932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 
00:34:44.736 [2024-12-05 12:18:09.652295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.652324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.652548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.652581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.652798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.652827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.653209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.653237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.653613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.653644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 
00:34:44.736 [2024-12-05 12:18:09.654000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.654028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.654394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.654422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.654768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.654799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.655164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.655192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 00:34:44.736 [2024-12-05 12:18:09.655430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.736 [2024-12-05 12:18:09.655484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:44.736 qpair failed and we were unable to recover it. 
00:34:44.736 [2024-12-05 12:18:09.655834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.736 [2024-12-05 12:18:09.655863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.736 qpair failed and we were unable to recover it.
00:34:44.736 [2024-12-05 12:18:09.656227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.736 [2024-12-05 12:18:09.656256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.736 qpair failed and we were unable to recover it.
00:34:44.736 [2024-12-05 12:18:09.656668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.736 [2024-12-05 12:18:09.656699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.736 qpair failed and we were unable to recover it.
00:34:44.736 [2024-12-05 12:18:09.656983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.736 [2024-12-05 12:18:09.657012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.736 qpair failed and we were unable to recover it.
00:34:44.736 [2024-12-05 12:18:09.657388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.736 [2024-12-05 12:18:09.657417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.736 qpair failed and we were unable to recover it.
00:34:44.736 [2024-12-05 12:18:09.657802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.736 [2024-12-05 12:18:09.657832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.736 qpair failed and we were unable to recover it.
00:34:44.736 [2024-12-05 12:18:09.658070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.736 [2024-12-05 12:18:09.658098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.736 qpair failed and we were unable to recover it.
00:34:44.736 [2024-12-05 12:18:09.658306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.736 [2024-12-05 12:18:09.658335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.736 qpair failed and we were unable to recover it.
00:34:44.736 [2024-12-05 12:18:09.658719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.658756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.658994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.659024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.659120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.659149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.659219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8c9e10 (9): Bad file descriptor
00:34:44.737 [2024-12-05 12:18:09.660065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.660190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.660723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.660828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.661284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.661321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.661733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.661838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.662294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.662332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.662715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.662751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.663046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.663079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.663322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.663351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.663720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.663754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.664098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.664127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.664508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.664542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.664972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.665002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.665250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.665278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.665681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.665714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.666092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.666124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.666332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.666361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.666753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.666784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.667158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.667188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.667615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.667648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.668018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.668050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.668261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.668294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.668644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.668675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.669020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.669050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.669264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.669303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.669679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.669711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.670069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.670098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.670474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.670505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.670885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.670915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.671277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.671307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.671715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.671746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.671981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.672010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.672369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.672399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.672665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.672695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.673096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.673126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.673472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.673503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.673725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.673754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.737 qpair failed and we were unable to recover it.
00:34:44.737 [2024-12-05 12:18:09.673953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.737 [2024-12-05 12:18:09.673982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.674376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.674406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.674712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.674744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.674987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.675017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.675238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.675268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.675533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.675565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.675905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.675934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.676304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.676334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.676686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.676718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.677123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.677153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.677407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.677441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.677801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.677832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.678088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.678121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.678492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.678523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.678887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.678917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.679285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.679315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.679689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.679721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.680093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.680121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.680369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.680399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.680769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.680801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.681166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.681194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.681610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.681641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.681848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.681877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.682252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.682280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.682628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.682658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.682873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.682903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.683301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.683330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.683553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.683584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.683957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.683986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.684371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.684401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.684774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.684804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.685188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.685217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.685607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.685638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.685999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.686028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.686399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.686429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.686816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.686848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.687216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.687244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.687489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.687520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.687889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.687918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.688151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.688182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.688557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.688587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.688969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.688999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.689374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.689403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.689817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.738 [2024-12-05 12:18:09.689848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.738 qpair failed and we were unable to recover it.
00:34:44.738 [2024-12-05 12:18:09.690227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.690256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.690519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.690580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.690956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.690985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.691357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.691386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.691753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.691785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.692155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.692185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.692556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.692587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.692941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.692969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.693333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.693362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.693723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.693754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.694138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.694174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.694513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.694543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.694973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.695003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.695339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.695368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.695760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.695792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.696034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.696064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.696429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.696494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.696852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.696881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.697245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.697276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.697641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.697674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.698054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.698085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.698542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.698572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.698940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.698971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.699350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:44.739 [2024-12-05 12:18:09.699381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:44.739 qpair failed and we were unable to recover it.
00:34:44.739 [2024-12-05 12:18:09.699778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.699811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 00:34:44.739 [2024-12-05 12:18:09.700154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.700185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 00:34:44.739 [2024-12-05 12:18:09.700555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.700587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 00:34:44.739 [2024-12-05 12:18:09.700966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.700998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 00:34:44.739 [2024-12-05 12:18:09.701354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.701384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 
00:34:44.739 [2024-12-05 12:18:09.701660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.701691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 00:34:44.739 [2024-12-05 12:18:09.702052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.702081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 00:34:44.739 [2024-12-05 12:18:09.702380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.702410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 00:34:44.739 [2024-12-05 12:18:09.702662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.702691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 00:34:44.739 [2024-12-05 12:18:09.703067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.703097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 
00:34:44.739 [2024-12-05 12:18:09.703197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.703226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 00:34:44.739 [2024-12-05 12:18:09.703597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.703630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 00:34:44.739 [2024-12-05 12:18:09.703989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.704018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 00:34:44.739 [2024-12-05 12:18:09.704405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.704435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 00:34:44.739 [2024-12-05 12:18:09.704791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.704821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 
00:34:44.739 [2024-12-05 12:18:09.705192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.705221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 00:34:44.739 [2024-12-05 12:18:09.705600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.705631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.739 qpair failed and we were unable to recover it. 00:34:44.739 [2024-12-05 12:18:09.705855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.739 [2024-12-05 12:18:09.705885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.706284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.706313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.706699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.706730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 
00:34:44.740 [2024-12-05 12:18:09.706989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.707019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.707363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.707392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.707648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.707679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.708037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.708065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.708421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.708450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 
00:34:44.740 [2024-12-05 12:18:09.708865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.708896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.709147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.709187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.709576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.709606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.709815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.709845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.710063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.710092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 
00:34:44.740 [2024-12-05 12:18:09.710515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.710546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.710887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.710917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.711301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.711331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.711579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.711611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.711983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.712012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 
00:34:44.740 [2024-12-05 12:18:09.712390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.712419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.712804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.712834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.713216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.713244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.713708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.713737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.713851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.713882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 
00:34:44.740 [2024-12-05 12:18:09.714276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.714305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.714705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.714735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.715117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.715145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.715520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.715550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.715933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.715963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 
00:34:44.740 [2024-12-05 12:18:09.716334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.716362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.716719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.716749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.717122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.717151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.717381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.717410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.717785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.717815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 
00:34:44.740 [2024-12-05 12:18:09.718172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.718201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.718467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.718497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.718763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.718792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.719169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.719199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 00:34:44.740 [2024-12-05 12:18:09.719446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.740 [2024-12-05 12:18:09.719487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.740 qpair failed and we were unable to recover it. 
00:34:44.741 [2024-12-05 12:18:09.719847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.719876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.720253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.720282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.720667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.720698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.721071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.721099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.721254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.721281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 
00:34:44.741 [2024-12-05 12:18:09.721664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.721693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.722090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.722119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.722342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.722370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.722659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.722687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.722939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.722970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 
00:34:44.741 [2024-12-05 12:18:09.723214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.723244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.723609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.723645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.724006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.724034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.724413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.724442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.724818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.724847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 
00:34:44.741 [2024-12-05 12:18:09.725055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.725084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.725184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.725210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.725592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.725622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.725840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.725868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.726240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.726269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 
00:34:44.741 [2024-12-05 12:18:09.726637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.726669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.727051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.727081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.727444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.727484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.727901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.727931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 00:34:44.741 [2024-12-05 12:18:09.728305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:44.741 [2024-12-05 12:18:09.728333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:44.741 qpair failed and we were unable to recover it. 
00:34:45.014 [2024-12-05 12:18:09.769257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.769287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.769542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.769576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.769964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.769993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.770370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.770400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.770768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.770800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 
00:34:45.014 [2024-12-05 12:18:09.771183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.771213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.771578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.771609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.772040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.772069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.772435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.772485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.772877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.772908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 
00:34:45.014 [2024-12-05 12:18:09.773348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.773377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.773722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.773752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.773994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.774023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.774387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.774416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.774804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.774834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 
00:34:45.014 [2024-12-05 12:18:09.775050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.775078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.775321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.775351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.775741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.775771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.776039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.776071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.776335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.776364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 
00:34:45.014 [2024-12-05 12:18:09.776756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.776786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.777149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.777178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.777548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.777578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.777955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.777984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.778372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.778400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 
00:34:45.014 [2024-12-05 12:18:09.778757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.778788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.779153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.014 [2024-12-05 12:18:09.779182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.014 qpair failed and we were unable to recover it. 00:34:45.014 [2024-12-05 12:18:09.779536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.779567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.779954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.779983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.780362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.780393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 
00:34:45.015 [2024-12-05 12:18:09.780815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.780845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.781073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.781103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.781479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.781509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.781921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.781950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.782319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.782348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 
00:34:45.015 [2024-12-05 12:18:09.782734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.782764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.783129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.783164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.783606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.783637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.783975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.784003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.784263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.784292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 
00:34:45.015 [2024-12-05 12:18:09.784638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.784670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.785040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.785069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.785439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.785483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.785868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.785896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.786263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.786291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 
00:34:45.015 [2024-12-05 12:18:09.786658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.786688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.787068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.787097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.787478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.787508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.787872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.787902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.788280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.788310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 
00:34:45.015 [2024-12-05 12:18:09.788677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.788708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.789083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.789112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.789485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.789516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.789851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.789880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.790331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.790361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 
00:34:45.015 [2024-12-05 12:18:09.790701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.790735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.790949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.790978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.791348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.791377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.791761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.791792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.792160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.792189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 
00:34:45.015 [2024-12-05 12:18:09.792425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.792464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.792839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.792869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.793250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.793279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.793684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.793722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 00:34:45.015 [2024-12-05 12:18:09.794086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.015 [2024-12-05 12:18:09.794115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.015 qpair failed and we were unable to recover it. 
00:34:45.016 [2024-12-05 12:18:09.794393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.016 [2024-12-05 12:18:09.794421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.016 qpair failed and we were unable to recover it. 00:34:45.016 [2024-12-05 12:18:09.794826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.016 [2024-12-05 12:18:09.794856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.016 qpair failed and we were unable to recover it. 00:34:45.016 [2024-12-05 12:18:09.795207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.016 [2024-12-05 12:18:09.795236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.016 qpair failed and we were unable to recover it. 00:34:45.016 [2024-12-05 12:18:09.795452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.016 [2024-12-05 12:18:09.795491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.016 qpair failed and we were unable to recover it. 00:34:45.016 [2024-12-05 12:18:09.795814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.016 [2024-12-05 12:18:09.795843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.016 qpair failed and we were unable to recover it. 
00:34:45.016 [2024-12-05 12:18:09.796203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.016 [2024-12-05 12:18:09.796233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.016 qpair failed and we were unable to recover it. 00:34:45.016 [2024-12-05 12:18:09.796591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.016 [2024-12-05 12:18:09.796622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.016 qpair failed and we were unable to recover it. 00:34:45.016 [2024-12-05 12:18:09.797001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.016 [2024-12-05 12:18:09.797031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.016 qpair failed and we were unable to recover it. 00:34:45.016 [2024-12-05 12:18:09.797324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.016 [2024-12-05 12:18:09.797354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.016 qpair failed and we were unable to recover it. 00:34:45.016 [2024-12-05 12:18:09.797704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.016 [2024-12-05 12:18:09.797735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.016 qpair failed and we were unable to recover it. 
00:34:45.016 [2024-12-05 12:18:09.798160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.016 [2024-12-05 12:18:09.798190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:45.016 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111, ECONNREFUSED) / qpair-failed sequence repeated for every reconnect attempt to 10.0.0.2:4420 from 12:18:09.798534 through 12:18:09.840934; repeats elided ...]
00:34:45.019 [2024-12-05 12:18:09.841313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.019 [2024-12-05 12:18:09.841343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.019 qpair failed and we were unable to recover it. 00:34:45.019 [2024-12-05 12:18:09.841680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.019 [2024-12-05 12:18:09.841710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.019 qpair failed and we were unable to recover it. 00:34:45.019 [2024-12-05 12:18:09.842121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.019 [2024-12-05 12:18:09.842150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.019 qpair failed and we were unable to recover it. 00:34:45.019 [2024-12-05 12:18:09.842512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.019 [2024-12-05 12:18:09.842542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.019 qpair failed and we were unable to recover it. 00:34:45.019 [2024-12-05 12:18:09.842925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.019 [2024-12-05 12:18:09.842955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.019 qpair failed and we were unable to recover it. 
00:34:45.019 [2024-12-05 12:18:09.843306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.019 [2024-12-05 12:18:09.843335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.019 qpair failed and we were unable to recover it. 00:34:45.019 [2024-12-05 12:18:09.843572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.019 [2024-12-05 12:18:09.843610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.019 qpair failed and we were unable to recover it. 00:34:45.019 [2024-12-05 12:18:09.843968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.019 [2024-12-05 12:18:09.843997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.019 qpair failed and we were unable to recover it. 00:34:45.019 [2024-12-05 12:18:09.844368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.019 [2024-12-05 12:18:09.844397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.019 qpair failed and we were unable to recover it. 00:34:45.019 [2024-12-05 12:18:09.844657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.019 [2024-12-05 12:18:09.844688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.019 qpair failed and we were unable to recover it. 
00:34:45.019 [2024-12-05 12:18:09.845030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.019 [2024-12-05 12:18:09.845060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.019 qpair failed and we were unable to recover it. 00:34:45.019 [2024-12-05 12:18:09.845423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.019 [2024-12-05 12:18:09.845452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.019 qpair failed and we were unable to recover it. 00:34:45.019 [2024-12-05 12:18:09.845813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.019 [2024-12-05 12:18:09.845843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.019 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.846053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.846087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.846329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.846358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 
00:34:45.020 [2024-12-05 12:18:09.846720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.846751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.846986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.847015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.847262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.847291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.847569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.847598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.847953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.847982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 
00:34:45.020 [2024-12-05 12:18:09.848362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.848390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.848735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.848765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.849021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.849051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.849316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.849346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.849649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.849680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 
00:34:45.020 [2024-12-05 12:18:09.850029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.850059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.850425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.850464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.850750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.850779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.851013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.851043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.851434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.851483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 
00:34:45.020 [2024-12-05 12:18:09.851871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.851900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.852261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.852290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.852516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.852548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.852975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.853006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.853370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.853400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 
00:34:45.020 [2024-12-05 12:18:09.853827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.853857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.854066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.854095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.854467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.854498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.854902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.854931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.855298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.855327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 
00:34:45.020 [2024-12-05 12:18:09.855680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.855711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.856070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.856099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.856472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.856504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.856912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.856942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.857179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.857211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 
00:34:45.020 [2024-12-05 12:18:09.857564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.857595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.857942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.857978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.858348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.858377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.858616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.858646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.859002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.859030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 
00:34:45.020 [2024-12-05 12:18:09.859352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.020 [2024-12-05 12:18:09.859380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.020 qpair failed and we were unable to recover it. 00:34:45.020 [2024-12-05 12:18:09.859724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.859756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.860120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.860149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.860558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.860588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.860935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.860965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 
00:34:45.021 [2024-12-05 12:18:09.861338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.861367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.861585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.861617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.861993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.862023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.862391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.862420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.862840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.862870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 
00:34:45.021 [2024-12-05 12:18:09.863279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.863309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.863648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.863679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.864054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.864083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.864309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.864339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.864580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.864615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 
00:34:45.021 [2024-12-05 12:18:09.865057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.865088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.865226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.865255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.865656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.865687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.866030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.866060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.866430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.866466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 
00:34:45.021 [2024-12-05 12:18:09.866875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.866904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.867270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.867300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.867560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.867591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.867825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.867855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 00:34:45.021 [2024-12-05 12:18:09.868302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.021 [2024-12-05 12:18:09.868332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.021 qpair failed and we were unable to recover it. 
00:34:45.021 [2024-12-05 12:18:09.868698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.021 [2024-12-05 12:18:09.868728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420
00:34:45.021 qpair failed and we were unable to recover it.
00:34:45.024 [the same three-line error sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats for each reconnect attempt, with only the timestamps changing, through 12:18:09.909960]
00:34:45.024 [2024-12-05 12:18:09.910304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.024 [2024-12-05 12:18:09.910333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.024 qpair failed and we were unable to recover it. 00:34:45.024 [2024-12-05 12:18:09.910585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.024 [2024-12-05 12:18:09.910615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.024 qpair failed and we were unable to recover it. 00:34:45.024 [2024-12-05 12:18:09.910979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.024 [2024-12-05 12:18:09.911007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.024 qpair failed and we were unable to recover it. 00:34:45.024 [2024-12-05 12:18:09.911377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.024 [2024-12-05 12:18:09.911412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.024 qpair failed and we were unable to recover it. 00:34:45.024 [2024-12-05 12:18:09.911795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.024 [2024-12-05 12:18:09.911825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.024 qpair failed and we were unable to recover it. 
00:34:45.024 [2024-12-05 12:18:09.912213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.024 [2024-12-05 12:18:09.912242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.024 qpair failed and we were unable to recover it. 00:34:45.024 [2024-12-05 12:18:09.912491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.024 [2024-12-05 12:18:09.912523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.024 qpair failed and we were unable to recover it. 00:34:45.024 [2024-12-05 12:18:09.912852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.024 [2024-12-05 12:18:09.912881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.024 qpair failed and we were unable to recover it. 00:34:45.024 [2024-12-05 12:18:09.913252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.024 [2024-12-05 12:18:09.913282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.024 qpair failed and we were unable to recover it. 00:34:45.024 [2024-12-05 12:18:09.913645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.024 [2024-12-05 12:18:09.913675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.024 qpair failed and we were unable to recover it. 
00:34:45.024 [2024-12-05 12:18:09.913880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.913908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.914273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.914302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.914677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.914707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.915089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.915119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.915495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.915526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 
00:34:45.025 [2024-12-05 12:18:09.915769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.915797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.916052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.916084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.916481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.916512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.916895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.916925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.917274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.917303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 
00:34:45.025 [2024-12-05 12:18:09.917754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.917784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.917989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.918018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.918394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.918423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.918763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.918792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.919171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.919201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 
00:34:45.025 [2024-12-05 12:18:09.919336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.919365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.919713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.919743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.920116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.920144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.920513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.920543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.920916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.920944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 
00:34:45.025 [2024-12-05 12:18:09.921177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.921205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.921560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.921590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.921970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.921999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.922331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.922361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.922744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.922775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 
00:34:45.025 [2024-12-05 12:18:09.923025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.923053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.923226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.923255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.923525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.923555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.923802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.923830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.924053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.924082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 
00:34:45.025 [2024-12-05 12:18:09.924482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.924513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.924868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.924898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.925245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.925273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.925493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.925530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.925896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.925925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 
00:34:45.025 [2024-12-05 12:18:09.926299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.926329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.926596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.926626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.927010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.927039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.025 qpair failed and we were unable to recover it. 00:34:45.025 [2024-12-05 12:18:09.927416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.025 [2024-12-05 12:18:09.927445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.927795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.927827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 
00:34:45.026 [2024-12-05 12:18:09.928201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.928230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.928577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.928607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.928984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.929013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.929348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.929377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.929746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.929776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 
00:34:45.026 [2024-12-05 12:18:09.930005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.930034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.930273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.930306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.930573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.930604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.930952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.930981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.931202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.931230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 
00:34:45.026 [2024-12-05 12:18:09.931582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.931612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.931970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.931999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.932368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.932397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.932762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.932793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.933003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.933032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 
00:34:45.026 [2024-12-05 12:18:09.933396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.933427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.933789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.933820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.934190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.934222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.934432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.934476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.934849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.934879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 
00:34:45.026 [2024-12-05 12:18:09.935250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.935282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.935655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.935686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.936082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.936112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.936477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.936509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.936875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.936904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 
00:34:45.026 [2024-12-05 12:18:09.937283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.937313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.937705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.937738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.938112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.938141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.938514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.938545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 00:34:45.026 [2024-12-05 12:18:09.938910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.026 [2024-12-05 12:18:09.938940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.026 qpair failed and we were unable to recover it. 
00:34:45.029 [2024-12-05 12:18:09.978970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.029 [2024-12-05 12:18:09.979001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.029 qpair failed and we were unable to recover it. 00:34:45.029 [2024-12-05 12:18:09.979365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.029 [2024-12-05 12:18:09.979395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.029 qpair failed and we were unable to recover it. 00:34:45.029 [2024-12-05 12:18:09.979768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.029 [2024-12-05 12:18:09.979798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.029 qpair failed and we were unable to recover it. 00:34:45.029 [2024-12-05 12:18:09.980168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.029 [2024-12-05 12:18:09.980200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.029 qpair failed and we were unable to recover it. 00:34:45.029 [2024-12-05 12:18:09.980544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.029 [2024-12-05 12:18:09.980575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.029 qpair failed and we were unable to recover it. 
00:34:45.029 [2024-12-05 12:18:09.980834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.029 [2024-12-05 12:18:09.980867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.029 qpair failed and we were unable to recover it. 00:34:45.029 [2024-12-05 12:18:09.981239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.029 [2024-12-05 12:18:09.981268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.029 qpair failed and we were unable to recover it. 00:34:45.029 [2024-12-05 12:18:09.981654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.029 [2024-12-05 12:18:09.981685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.029 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.982056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.982087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.982433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.982472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 
00:34:45.030 [2024-12-05 12:18:09.982833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.982864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.983170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.983199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.983580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.983611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.983965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.984001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.984361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.984390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 
00:34:45.030 [2024-12-05 12:18:09.984651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.984687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.984908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.984938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.985143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.985173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.985380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.985415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.985818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.985850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 
00:34:45.030 [2024-12-05 12:18:09.986062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.986091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.986321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.986358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.986771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.986802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.987152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.987182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.987549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.987582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 
00:34:45.030 [2024-12-05 12:18:09.987931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.987960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.988333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.988363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.988759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.988789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.989157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.989192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.989557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.989588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 
00:34:45.030 [2024-12-05 12:18:09.989683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.989710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.990043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.990072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.990291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.990321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.990688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.990719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.990935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.990964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 
00:34:45.030 [2024-12-05 12:18:09.991336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.991365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.991608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.991639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.991889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.991918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.992264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.992293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.030 qpair failed and we were unable to recover it. 00:34:45.030 [2024-12-05 12:18:09.992561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.030 [2024-12-05 12:18:09.992592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 
00:34:45.031 [2024-12-05 12:18:09.993017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.993048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.993379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.993409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.993791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.993822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.994174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.994204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.994574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.994605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 
00:34:45.031 [2024-12-05 12:18:09.994961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.994991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.995362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.995392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.995770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.995801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.996173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.996203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.996575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.996605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 
00:34:45.031 [2024-12-05 12:18:09.996832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.996862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.997126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.997156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.997395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.997424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.997772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.997808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.998022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.998053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 
00:34:45.031 [2024-12-05 12:18:09.998298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.998328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.998537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.998575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.998943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.998973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.999320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.999350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:09.999595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.999626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 
00:34:45.031 [2024-12-05 12:18:09.999867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:09.999897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:10.000119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:10.000149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:10.000406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:10.000438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:10.000828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:10.000858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:10.001100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:10.001131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 
00:34:45.031 [2024-12-05 12:18:10.001407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:10.001437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:10.001817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:10.001847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:10.002085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:10.002116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:10.002493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:10.002525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:10.002817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:10.002858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 
00:34:45.031 [2024-12-05 12:18:10.003714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:10.003746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:10.004134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:10.004167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.031 [2024-12-05 12:18:10.004391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.031 [2024-12-05 12:18:10.004424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2788000b90 with addr=10.0.0.2, port=4420 00:34:45.031 qpair failed and we were unable to recover it. 00:34:45.032 [2024-12-05 12:18:10.004884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.032 [2024-12-05 12:18:10.004982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.032 qpair failed and we were unable to recover it. 00:34:45.032 [2024-12-05 12:18:10.005412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.032 [2024-12-05 12:18:10.005490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.032 qpair failed and we were unable to recover it. 
00:34:45.032 [2024-12-05 12:18:10.005757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.032 [2024-12-05 12:18:10.005805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.032 qpair failed and we were unable to recover it.
00:34:45.035 [... same posix_sock_create / nvme_tcp_qpair_connect_sock error triple repeated for tqpair=0x8d40c0 (addr=10.0.0.2, port=4420) through 2024-12-05 12:18:10.048389 ...]
00:34:45.035 [2024-12-05 12:18:10.048806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.035 [2024-12-05 12:18:10.048854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.035 qpair failed and we were unable to recover it. 00:34:45.035 [2024-12-05 12:18:10.049251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.035 [2024-12-05 12:18:10.049300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.035 qpair failed and we were unable to recover it. 00:34:45.035 [2024-12-05 12:18:10.049544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.035 [2024-12-05 12:18:10.049592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.035 qpair failed and we were unable to recover it. 00:34:45.035 [2024-12-05 12:18:10.049867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.035 [2024-12-05 12:18:10.049916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.035 qpair failed and we were unable to recover it. 00:34:45.035 [2024-12-05 12:18:10.050307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.035 [2024-12-05 12:18:10.050354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.035 qpair failed and we were unable to recover it. 
00:34:45.035 [2024-12-05 12:18:10.050634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.035 [2024-12-05 12:18:10.050683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.035 qpair failed and we were unable to recover it. 00:34:45.035 [2024-12-05 12:18:10.051052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.035 [2024-12-05 12:18:10.051086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.035 qpair failed and we were unable to recover it. 00:34:45.035 [2024-12-05 12:18:10.051494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.035 [2024-12-05 12:18:10.051526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.035 qpair failed and we were unable to recover it. 00:34:45.035 [2024-12-05 12:18:10.051913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.035 [2024-12-05 12:18:10.051942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.035 qpair failed and we were unable to recover it. 00:34:45.035 [2024-12-05 12:18:10.052189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.035 [2024-12-05 12:18:10.052218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.035 qpair failed and we were unable to recover it. 
00:34:45.035 [2024-12-05 12:18:10.052593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.035 [2024-12-05 12:18:10.052629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.035 qpair failed and we were unable to recover it. 00:34:45.035 [2024-12-05 12:18:10.053081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.035 [2024-12-05 12:18:10.053111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.035 qpair failed and we were unable to recover it. 00:34:45.035 [2024-12-05 12:18:10.053469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.035 [2024-12-05 12:18:10.053500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.035 qpair failed and we were unable to recover it. 00:34:45.306 [2024-12-05 12:18:10.053860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.053897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 00:34:45.306 [2024-12-05 12:18:10.054287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.054321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 
00:34:45.306 [2024-12-05 12:18:10.054681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.054711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 00:34:45.306 [2024-12-05 12:18:10.055006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.055039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 00:34:45.306 [2024-12-05 12:18:10.055394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.055424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 00:34:45.306 [2024-12-05 12:18:10.055851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.055882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 00:34:45.306 [2024-12-05 12:18:10.056226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.056257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 
00:34:45.306 [2024-12-05 12:18:10.056631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.056666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 00:34:45.306 [2024-12-05 12:18:10.057036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.057066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 00:34:45.306 [2024-12-05 12:18:10.057269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.057302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 00:34:45.306 [2024-12-05 12:18:10.057685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.057718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 00:34:45.306 [2024-12-05 12:18:10.058085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.058117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 
00:34:45.306 [2024-12-05 12:18:10.058478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.058511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 00:34:45.306 [2024-12-05 12:18:10.058865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.058904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 00:34:45.306 [2024-12-05 12:18:10.059288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.059322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 00:34:45.306 [2024-12-05 12:18:10.059693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.059728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 00:34:45.306 [2024-12-05 12:18:10.060064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.306 [2024-12-05 12:18:10.060096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.306 qpair failed and we were unable to recover it. 
00:34:45.306 [2024-12-05 12:18:10.060477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.060511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.060671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.060698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.061050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.061076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.061447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.061477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.061796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.061820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 
00:34:45.307 [2024-12-05 12:18:10.062153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.062174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.062452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.062479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.062831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.062852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.063053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.063079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.063439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.063473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 
00:34:45.307 [2024-12-05 12:18:10.063863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.063887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.064290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.064312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.064520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.064545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.064868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.064890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.065213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.065234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 
00:34:45.307 [2024-12-05 12:18:10.065570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.065596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.066039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.066062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.066407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.066428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.066789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.066812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.067142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.067165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 
00:34:45.307 [2024-12-05 12:18:10.067503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.067528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.067887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.067914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.068282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.068305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.068702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.068725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.069097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.069119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 
00:34:45.307 [2024-12-05 12:18:10.069443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.069474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.069685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.069711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.069818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.069842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.070197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.070222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.070441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.070473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 
00:34:45.307 [2024-12-05 12:18:10.070859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.070885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.071277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.071301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.071677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.071702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.072084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.072108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.072480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.072502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 
00:34:45.307 [2024-12-05 12:18:10.072883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.072909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.073270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.073295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.073673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.073697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.307 qpair failed and we were unable to recover it. 00:34:45.307 [2024-12-05 12:18:10.073956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.307 [2024-12-05 12:18:10.073978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.308 qpair failed and we were unable to recover it. 00:34:45.308 [2024-12-05 12:18:10.074232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.308 [2024-12-05 12:18:10.074260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.308 qpair failed and we were unable to recover it. 
00:34:45.308 [2024-12-05 12:18:10.074638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.308 [2024-12-05 12:18:10.074664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.308 qpair failed and we were unable to recover it. 00:34:45.308 [2024-12-05 12:18:10.074817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.308 [2024-12-05 12:18:10.074837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.308 qpair failed and we were unable to recover it. 00:34:45.308 [2024-12-05 12:18:10.075176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.308 [2024-12-05 12:18:10.075199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.308 qpair failed and we were unable to recover it. 00:34:45.308 [2024-12-05 12:18:10.075510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.308 [2024-12-05 12:18:10.075542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.308 qpair failed and we were unable to recover it. 00:34:45.308 [2024-12-05 12:18:10.075901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.308 [2024-12-05 12:18:10.075936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.308 qpair failed and we were unable to recover it. 
00:34:45.308 [2024-12-05 12:18:10.076325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.308 [2024-12-05 12:18:10.076349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.308 qpair failed and we were unable to recover it.
[… the same three-line failure sequence (posix.c connect() errno 111 → nvme_tcp_qpair_connect_sock error for tqpair=0x8d40c0, addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it.") repeats ~115 more times between 12:18:10.076 and 12:18:10.111; repeats elided …]
00:34:45.311 [2024-12-05 12:18:10.111530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.311 [2024-12-05 12:18:10.111548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.311 qpair failed and we were unable to recover it.
00:34:45.311 [2024-12-05 12:18:10.111802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.311 [2024-12-05 12:18:10.111820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.311 qpair failed and we were unable to recover it. 00:34:45.311 [2024-12-05 12:18:10.112002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.311 [2024-12-05 12:18:10.112021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.311 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.112363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.112380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.112735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.112760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.113094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.113118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 
00:34:45.312 [2024-12-05 12:18:10.113448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.113481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.113683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.113711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.113990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.114015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.114132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.114154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.114488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.114515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 
00:34:45.312 [2024-12-05 12:18:10.114876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.114900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.115239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.115263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.115612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.115637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.115987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.116017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.116384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.116408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 
00:34:45.312 [2024-12-05 12:18:10.116781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.116807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.117030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.117054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.117276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.117301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.117644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.117670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.118035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.118061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 
00:34:45.312 [2024-12-05 12:18:10.118439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.118479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.118705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.118731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.118991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.119015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.119348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.119372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.119501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.119524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 
00:34:45.312 [2024-12-05 12:18:10.119987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.120103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.120434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.120493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.120867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.120899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.121251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.121282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.121773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.121881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f277c000b90 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 
00:34:45.312 [2024-12-05 12:18:10.122247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.122275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.122559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.122585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.122982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.123013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.123392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.123422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.123674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.123707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 
00:34:45.312 [2024-12-05 12:18:10.124111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.124141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.124364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.124395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.124774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.124805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.125183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.125212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 00:34:45.312 [2024-12-05 12:18:10.125476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.125514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.312 qpair failed and we were unable to recover it. 
00:34:45.312 [2024-12-05 12:18:10.125905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.312 [2024-12-05 12:18:10.125940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.126321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.126353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.126709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.126740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.127109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.127140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.127494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.127526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 
00:34:45.313 [2024-12-05 12:18:10.127948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.127979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.128211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.128240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.128602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.128633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.128997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.129027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.129411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.129441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 
00:34:45.313 [2024-12-05 12:18:10.129679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.129710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.130113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.130143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.130519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.130551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.130777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.130807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.131207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.131238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 
00:34:45.313 [2024-12-05 12:18:10.131602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.131634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.131996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.132026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.132283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.132312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.132657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.132689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.133053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.133083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 
00:34:45.313 [2024-12-05 12:18:10.133419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.133451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.133839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.133872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.134219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.134250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.134627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.134658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.134890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.134921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 
00:34:45.313 [2024-12-05 12:18:10.135176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.135208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.135434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.135474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.135692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.135723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.135997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.136028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.136510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.136540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 
00:34:45.313 [2024-12-05 12:18:10.136784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.136814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.137185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.137215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.137579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.137610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.137881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.137911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.138184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.138216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 
00:34:45.313 [2024-12-05 12:18:10.138579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.138612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.138993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.139023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.139249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.139281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.139524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.313 [2024-12-05 12:18:10.139557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.313 qpair failed and we were unable to recover it. 00:34:45.313 [2024-12-05 12:18:10.139940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.314 [2024-12-05 12:18:10.139972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.314 qpair failed and we were unable to recover it. 
00:34:45.314 [2024-12-05 12:18:10.140220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.314 [2024-12-05 12:18:10.140251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.314 qpair failed and we were unable to recover it. 00:34:45.314 [2024-12-05 12:18:10.140728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.314 [2024-12-05 12:18:10.140760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.314 qpair failed and we were unable to recover it. 00:34:45.314 [2024-12-05 12:18:10.141004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.314 [2024-12-05 12:18:10.141034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.314 qpair failed and we were unable to recover it. 00:34:45.314 [2024-12-05 12:18:10.141510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.314 [2024-12-05 12:18:10.141541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.314 qpair failed and we were unable to recover it. 00:34:45.314 [2024-12-05 12:18:10.141924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.314 [2024-12-05 12:18:10.141954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.314 qpair failed and we were unable to recover it. 
00:34:45.314 [2024-12-05 12:18:10.142371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.142402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.142768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.142800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.143213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.143243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.143502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.143534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.143755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.143784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.144028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.144060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.144498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.144535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.144775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.144806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.145150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.145181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.145562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.145594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.146022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.146076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.146377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.146441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.146657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.146700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.146893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.146936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.147249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.147290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.147564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.147601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.147853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.147884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.148231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.148261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.148518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.148551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.148797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.148827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.149064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.149095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.149564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.149597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.149934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.149965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.150318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.150356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.150768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.150801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.151164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.151197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.151484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.151517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.151790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.151820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.152223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.152253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.152629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.152660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.153029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.153060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.153350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.153384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.314 [2024-12-05 12:18:10.153661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.314 [2024-12-05 12:18:10.153693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.314 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.153928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.153957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.154334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.154363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.154727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.154758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.155127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.155157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.155526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.155561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.155799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.155831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.156086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.156117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.156357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.156388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.156753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.156786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.157148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.157178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.157469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.157500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.157838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.157871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.158081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.158112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.158517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.158549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.158894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.158925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.159132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.159163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.159421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.159451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.159844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.159874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.160260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.160292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.160775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.160807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.161184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.161214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.161338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.161369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Write completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Write completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Write completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Write completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Write completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Write completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Write completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Write completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Read completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Write completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 Write completed with error (sct=0, sc=8)
00:34:45.315 starting I/O failed
00:34:45.315 [2024-12-05 12:18:10.162198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:34:45.315 [2024-12-05 12:18:10.162781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.162902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2780000b90 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.163357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.163396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f2780000b90 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.163890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.163924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.164380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.164410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.164794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.164825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.315 qpair failed and we were unable to recover it.
00:34:45.315 [2024-12-05 12:18:10.165176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.315 [2024-12-05 12:18:10.165207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.165540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.165572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.165970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.166000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.166394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.166425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.166705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.166738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.167102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.167134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.167524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.167556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.167793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.167826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.168219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.168251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.168639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.168670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.168934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.168967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.169267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.169299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.169634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.169666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.169894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.169925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.170326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.170355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.170616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.170647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.171031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.171061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.171174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.171206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.171499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.171530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.171832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.171862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.172241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.172270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.172686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.172717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.173081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.173109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.173358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.173389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.173675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.173712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.174102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.174133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.174355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.174385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.174762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:45.316 [2024-12-05 12:18:10.174792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420
00:34:45.316 qpair failed and we were unable to recover it.
00:34:45.316 [2024-12-05 12:18:10.175152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.316 [2024-12-05 12:18:10.175181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.316 qpair failed and we were unable to recover it. 00:34:45.316 [2024-12-05 12:18:10.175397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.316 [2024-12-05 12:18:10.175425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.316 qpair failed and we were unable to recover it. 00:34:45.316 [2024-12-05 12:18:10.175781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.316 [2024-12-05 12:18:10.175811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.316 qpair failed and we were unable to recover it. 00:34:45.316 [2024-12-05 12:18:10.176081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.316 [2024-12-05 12:18:10.176110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.316 qpair failed and we were unable to recover it. 00:34:45.316 [2024-12-05 12:18:10.176402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.316 [2024-12-05 12:18:10.176432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.316 qpair failed and we were unable to recover it. 
00:34:45.316 [2024-12-05 12:18:10.176677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.316 [2024-12-05 12:18:10.176707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.316 qpair failed and we were unable to recover it. 00:34:45.316 [2024-12-05 12:18:10.177104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.316 [2024-12-05 12:18:10.177133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.316 qpair failed and we were unable to recover it. 00:34:45.316 [2024-12-05 12:18:10.177510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.316 [2024-12-05 12:18:10.177541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.316 qpair failed and we were unable to recover it. 00:34:45.316 [2024-12-05 12:18:10.177799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.177828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.178204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.178234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 
00:34:45.317 [2024-12-05 12:18:10.178475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.178507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.178794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.178824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.179080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.179109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.179491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.179522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.179818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.179848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 
00:34:45.317 [2024-12-05 12:18:10.180269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.180299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.180692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.180724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.181084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.181114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.181575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.181606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.182064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.182095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 
00:34:45.317 [2024-12-05 12:18:10.182450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.182488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.182763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.182792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.183149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.183178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.183473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.183503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.183674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.183703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 
00:34:45.317 [2024-12-05 12:18:10.183956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.183984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.184115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.184144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.184483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.184515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.184887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.184917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.185271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.185300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 
00:34:45.317 [2024-12-05 12:18:10.185691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.185722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.186100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.186130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.186365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.186395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.186772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.186802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.187183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.187212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 
00:34:45.317 [2024-12-05 12:18:10.187585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.187615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.187841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.187872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.188241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.188271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.188509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.188539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.188900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.188929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 
00:34:45.317 [2024-12-05 12:18:10.189287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.189316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.189539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.189570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.189938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.189968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.190189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.190218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.190595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.190626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 
00:34:45.317 [2024-12-05 12:18:10.191001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.191030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.317 [2024-12-05 12:18:10.191422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.317 [2024-12-05 12:18:10.191452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.317 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.191565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.191593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.191863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.191892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.192053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.192082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 
00:34:45.318 [2024-12-05 12:18:10.192355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.192385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.192656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.192686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.192908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.192937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.193303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.193334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.193744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.193775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 
00:34:45.318 [2024-12-05 12:18:10.194138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.194167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.194398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.194430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.194671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.194702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.195115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.195144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.195524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.195555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 
00:34:45.318 [2024-12-05 12:18:10.195926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.195955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.196332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.196362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.196622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.196652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.197015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.197044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.197300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.197340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 
00:34:45.318 [2024-12-05 12:18:10.197793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.197825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.198070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.198099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.198475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.198506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.198965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.198994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.199260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.199290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 
00:34:45.318 [2024-12-05 12:18:10.199571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.199601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.199820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.199851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.200239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.200269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.200726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.200756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.200968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.200997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 
00:34:45.318 [2024-12-05 12:18:10.201380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.201408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.201831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.201861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.202243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.202273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.202633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.202665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.203110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.203139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 
00:34:45.318 [2024-12-05 12:18:10.203571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.203600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.203965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.203994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.204232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.204262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.204663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.204696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 00:34:45.318 [2024-12-05 12:18:10.205102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.205131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it. 
00:34:45.318 [2024-12-05 12:18:10.205564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.318 [2024-12-05 12:18:10.205594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.318 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / qpair-failed triplets for tqpair=0x8d40c0 (10.0.0.2:4420) repeat through 12:18:10.210897; duplicates elided ...]
00:34:45.319 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:34:45.319 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:34:45.319 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:34:45.319 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:34:45.319 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... remaining identical connect() retries (errno = 111) against 10.0.0.2:4420 through 12:18:10.247202 elided as duplicates ...]
00:34:45.322 [2024-12-05 12:18:10.247420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.247450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.247847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.247877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.248257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.248289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.248655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.248686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.249069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.249100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 
00:34:45.322 [2024-12-05 12:18:10.249476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.249508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.249727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.249757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.250005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.250036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.250407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.250438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.250845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.250876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 
00:34:45.322 [2024-12-05 12:18:10.251125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.251154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.251530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.251564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.251783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.251814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.252052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.252085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.252479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.252511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 
00:34:45.322 [2024-12-05 12:18:10.252762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.252793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.253027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.253057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.253302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.253332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.253556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.253590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.253975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.254005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 
00:34:45.322 [2024-12-05 12:18:10.254230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.254260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:45.322 [2024-12-05 12:18:10.254636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.254668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:45.322 [2024-12-05 12:18:10.255089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.255120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.322 [2024-12-05 12:18:10.255352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.255384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 
00:34:45.322 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:45.322 [2024-12-05 12:18:10.255746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.255779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.256157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.256187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.256478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.256510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.256859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.256889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.257251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.257282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 
00:34:45.322 [2024-12-05 12:18:10.257670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.257702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.258070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.258101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.258475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.258506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.258865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.258894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 00:34:45.322 [2024-12-05 12:18:10.259299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.259331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.322 qpair failed and we were unable to recover it. 
00:34:45.322 [2024-12-05 12:18:10.259697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.322 [2024-12-05 12:18:10.259727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.259957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.259986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.260300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.260330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.260707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.260739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.261073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.261103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 
00:34:45.323 [2024-12-05 12:18:10.261259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.261289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.261671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.261702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.261963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.261995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.262362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.262394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.262780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.262813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 
00:34:45.323 [2024-12-05 12:18:10.263157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.263187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.263552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.263582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.263921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.263951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.264349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.264380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.264808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.264839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 
00:34:45.323 [2024-12-05 12:18:10.265208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.265237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.265609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.265639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.266024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.266054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.266317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.266349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.266711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.266742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 
00:34:45.323 [2024-12-05 12:18:10.267126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.267156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.267531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.267561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.267965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.267994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.268224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.268254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.268492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.268522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 
00:34:45.323 [2024-12-05 12:18:10.268880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.268910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.269285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.269316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.269742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.269773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.269998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.270028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.270392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.270423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 
00:34:45.323 [2024-12-05 12:18:10.270817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.270847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.271209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.271245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.271703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.271735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.272092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.272121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.272336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.272364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 
00:34:45.323 [2024-12-05 12:18:10.272580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.272611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.272853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.272882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.273237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.273268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.273631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.323 [2024-12-05 12:18:10.273663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.323 qpair failed and we were unable to recover it. 00:34:45.323 [2024-12-05 12:18:10.274032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.324 [2024-12-05 12:18:10.274061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.324 qpair failed and we were unable to recover it. 
00:34:45.324 [2024-12-05 12:18:10.274332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.324 [2024-12-05 12:18:10.274363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.324 qpair failed and we were unable to recover it. 00:34:45.324 [2024-12-05 12:18:10.274619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.324 [2024-12-05 12:18:10.274650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.324 qpair failed and we were unable to recover it. 00:34:45.324 [2024-12-05 12:18:10.274990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.324 [2024-12-05 12:18:10.275020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.324 qpair failed and we were unable to recover it. 00:34:45.324 [2024-12-05 12:18:10.275389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.324 [2024-12-05 12:18:10.275420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.324 qpair failed and we were unable to recover it. 00:34:45.324 [2024-12-05 12:18:10.275855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.324 [2024-12-05 12:18:10.275886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.324 qpair failed and we were unable to recover it. 
00:34:45.325 Malloc0
00:34:45.325 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.325 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:45.325 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.325 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:45.325 [2024-12-05 12:18:10.296920] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:45.326 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:45.326 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:45.326 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:45.326 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:45.327 [2024-12-05 12:18:10.315793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.315824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.316190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.316222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.316599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.316630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.316985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.317014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.317417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.317447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 
00:34:45.327 [2024-12-05 12:18:10.317728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.317759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.327 [2024-12-05 12:18:10.318098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.318128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:45.327 [2024-12-05 12:18:10.318510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.318540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.327 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:45.327 [2024-12-05 12:18:10.318930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.318967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 
00:34:45.327 [2024-12-05 12:18:10.319340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.319371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.319726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.319756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.320028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.320058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.320434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.320479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.320599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.320631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 
00:34:45.327 [2024-12-05 12:18:10.320999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.321031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.321234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.321263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.321640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.321674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.321915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.321946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.322322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.322353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 
00:34:45.327 [2024-12-05 12:18:10.322560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.322592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.322991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.323022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.323419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.323450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.323844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.323876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.324122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.324152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 
00:34:45.327 [2024-12-05 12:18:10.324568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.324602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.324967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.324998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.325369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.325401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.325642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.325674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.326048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.326080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 
00:34:45.327 [2024-12-05 12:18:10.326445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.326495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.326926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.326960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.327179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.327211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.327 [2024-12-05 12:18:10.327600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.327 [2024-12-05 12:18:10.327638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.327 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.327993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.328023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 
00:34:45.328 [2024-12-05 12:18:10.328394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.328424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.328794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.328825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.329225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.329255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.329620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.329652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.329902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.329935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 
00:34:45.328 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.328 [2024-12-05 12:18:10.330306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.330338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:45.328 [2024-12-05 12:18:10.330603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.330636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.328 [2024-12-05 12:18:10.330834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.330865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:45.328 [2024-12-05 12:18:10.331241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.331271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 
00:34:45.328 [2024-12-05 12:18:10.331617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.331648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.331979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.332011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.332233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.332263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.332621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.332652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.332907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.332943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 
00:34:45.328 [2024-12-05 12:18:10.333308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.333339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.333592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.333622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.334013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.334044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.334345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.334379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.334645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.334679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 
00:34:45.328 [2024-12-05 12:18:10.334942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.334971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.335287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.335318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.335790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.335821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.336183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.336214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.336465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.336498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 
00:34:45.328 [2024-12-05 12:18:10.336901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.336930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.337140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:45.328 [2024-12-05 12:18:10.337168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8d40c0 with addr=10.0.0.2, port=4420 00:34:45.328 qpair failed and we were unable to recover it. 00:34:45.328 [2024-12-05 12:18:10.337300] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:45.328 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.328 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:45.328 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.328 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:45.590 [2024-12-05 12:18:10.348224] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.590 [2024-12-05 12:18:10.348377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 [2024-12-05 12:18:10.348432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-12-05 12:18:10.348471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.590 [2024-12-05 12:18:10.348493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 [2024-12-05 12:18:10.348549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.590 qpair failed and we were unable to recover it. 00:34:45.590 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.590 12:18:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1553862 00:34:45.590 [2024-12-05 12:18:10.358025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.590 [2024-12-05 12:18:10.358140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 [2024-12-05 12:18:10.358173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 [2024-12-05 12:18:10.358190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command [2024-12-05 12:18:10.358204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 [2024-12-05 12:18:10.358236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.590 qpair failed and we were unable to recover it.
00:34:45.590 [2024-12-05 12:18:10.368105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.590 [2024-12-05 12:18:10.368182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.590 [2024-12-05 12:18:10.368207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.590 [2024-12-05 12:18:10.368218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.590 [2024-12-05 12:18:10.368228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.590 [2024-12-05 12:18:10.368252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.590 qpair failed and we were unable to recover it. 
00:34:45.590 [2024-12-05 12:18:10.378100] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.590 [2024-12-05 12:18:10.378228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.590 [2024-12-05 12:18:10.378245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.590 [2024-12-05 12:18:10.378254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.590 [2024-12-05 12:18:10.378268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.590 [2024-12-05 12:18:10.378286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.590 qpair failed and we were unable to recover it. 
00:34:45.590 [2024-12-05 12:18:10.388070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.590 [2024-12-05 12:18:10.388147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.590 [2024-12-05 12:18:10.388165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.590 [2024-12-05 12:18:10.388173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.590 [2024-12-05 12:18:10.388179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.590 [2024-12-05 12:18:10.388195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.590 qpair failed and we were unable to recover it. 
00:34:45.590 [... 2024-12-05 12:18:10.397903 through 12:18:10.729038: the same six-record CONNECT failure sequence (Unknown controller ID 0x1; Connect command failed, rc -5; sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x8d40c0; CQ transport error -6 on qpair id 3; "qpair failed and we were unable to recover it.") repeats 34 more times at roughly 10 ms intervals ...]
00:34:45.855 [2024-12-05 12:18:10.739054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.855 [2024-12-05 12:18:10.739126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.855 [2024-12-05 12:18:10.739143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.855 [2024-12-05 12:18:10.739150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.855 [2024-12-05 12:18:10.739156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.855 [2024-12-05 12:18:10.739173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.855 qpair failed and we were unable to recover it. 
00:34:45.855 [2024-12-05 12:18:10.749126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.855 [2024-12-05 12:18:10.749238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.855 [2024-12-05 12:18:10.749255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.855 [2024-12-05 12:18:10.749262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.855 [2024-12-05 12:18:10.749269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.855 [2024-12-05 12:18:10.749285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.855 qpair failed and we were unable to recover it. 
00:34:45.855 [2024-12-05 12:18:10.759082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.855 [2024-12-05 12:18:10.759150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.855 [2024-12-05 12:18:10.759188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.855 [2024-12-05 12:18:10.759198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.855 [2024-12-05 12:18:10.759205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.855 [2024-12-05 12:18:10.759230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.855 qpair failed and we were unable to recover it. 
00:34:45.855 [2024-12-05 12:18:10.769124] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.855 [2024-12-05 12:18:10.769190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.855 [2024-12-05 12:18:10.769211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.855 [2024-12-05 12:18:10.769218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.855 [2024-12-05 12:18:10.769232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.855 [2024-12-05 12:18:10.769250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.855 qpair failed and we were unable to recover it. 
00:34:45.855 [2024-12-05 12:18:10.779147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.855 [2024-12-05 12:18:10.779264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.855 [2024-12-05 12:18:10.779284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.855 [2024-12-05 12:18:10.779291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.855 [2024-12-05 12:18:10.779298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.855 [2024-12-05 12:18:10.779315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.855 qpair failed and we were unable to recover it. 
00:34:45.855 [2024-12-05 12:18:10.789189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.855 [2024-12-05 12:18:10.789259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.855 [2024-12-05 12:18:10.789278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.856 [2024-12-05 12:18:10.789285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.856 [2024-12-05 12:18:10.789292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.856 [2024-12-05 12:18:10.789309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.856 qpair failed and we were unable to recover it. 
00:34:45.856 [2024-12-05 12:18:10.799191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.856 [2024-12-05 12:18:10.799256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.856 [2024-12-05 12:18:10.799274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.856 [2024-12-05 12:18:10.799282] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.856 [2024-12-05 12:18:10.799288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.856 [2024-12-05 12:18:10.799305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.856 qpair failed and we were unable to recover it. 
00:34:45.856 [2024-12-05 12:18:10.809234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.856 [2024-12-05 12:18:10.809294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.856 [2024-12-05 12:18:10.809312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.856 [2024-12-05 12:18:10.809320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.856 [2024-12-05 12:18:10.809326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.856 [2024-12-05 12:18:10.809344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.856 qpair failed and we were unable to recover it. 
00:34:45.856 [2024-12-05 12:18:10.819272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.856 [2024-12-05 12:18:10.819340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.856 [2024-12-05 12:18:10.819360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.856 [2024-12-05 12:18:10.819368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.856 [2024-12-05 12:18:10.819375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.856 [2024-12-05 12:18:10.819393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.856 qpair failed and we were unable to recover it. 
00:34:45.856 [2024-12-05 12:18:10.829259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.856 [2024-12-05 12:18:10.829335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.856 [2024-12-05 12:18:10.829352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.856 [2024-12-05 12:18:10.829359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.856 [2024-12-05 12:18:10.829366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.856 [2024-12-05 12:18:10.829382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.856 qpair failed and we were unable to recover it. 
00:34:45.856 [2024-12-05 12:18:10.839365] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.856 [2024-12-05 12:18:10.839487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.856 [2024-12-05 12:18:10.839505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.856 [2024-12-05 12:18:10.839513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.856 [2024-12-05 12:18:10.839519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.856 [2024-12-05 12:18:10.839535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.856 qpair failed and we were unable to recover it. 
00:34:45.856 [2024-12-05 12:18:10.849339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.856 [2024-12-05 12:18:10.849406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.856 [2024-12-05 12:18:10.849422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.856 [2024-12-05 12:18:10.849430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.856 [2024-12-05 12:18:10.849436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.856 [2024-12-05 12:18:10.849452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.856 qpair failed and we were unable to recover it. 
00:34:45.856 [2024-12-05 12:18:10.859426] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.856 [2024-12-05 12:18:10.859529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.856 [2024-12-05 12:18:10.859551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.856 [2024-12-05 12:18:10.859558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.856 [2024-12-05 12:18:10.859565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.856 [2024-12-05 12:18:10.859582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.856 qpair failed and we were unable to recover it. 
00:34:45.856 [2024-12-05 12:18:10.869440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.856 [2024-12-05 12:18:10.869522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.856 [2024-12-05 12:18:10.869539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.856 [2024-12-05 12:18:10.869546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.856 [2024-12-05 12:18:10.869553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.856 [2024-12-05 12:18:10.869569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.856 qpair failed and we were unable to recover it. 
00:34:45.856 [2024-12-05 12:18:10.879447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.856 [2024-12-05 12:18:10.879520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.856 [2024-12-05 12:18:10.879538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.856 [2024-12-05 12:18:10.879545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.856 [2024-12-05 12:18:10.879552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.856 [2024-12-05 12:18:10.879568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.856 qpair failed and we were unable to recover it. 
00:34:45.856 [2024-12-05 12:18:10.889500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.856 [2024-12-05 12:18:10.889563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.856 [2024-12-05 12:18:10.889580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.856 [2024-12-05 12:18:10.889587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.856 [2024-12-05 12:18:10.889593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.856 [2024-12-05 12:18:10.889609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.856 qpair failed and we were unable to recover it. 
00:34:45.856 [2024-12-05 12:18:10.899520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:45.856 [2024-12-05 12:18:10.899602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:45.856 [2024-12-05 12:18:10.899621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:45.856 [2024-12-05 12:18:10.899629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:45.856 [2024-12-05 12:18:10.899641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:45.856 [2024-12-05 12:18:10.899657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:45.856 qpair failed and we were unable to recover it. 
00:34:46.118 [2024-12-05 12:18:10.909542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.118 [2024-12-05 12:18:10.909618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.118 [2024-12-05 12:18:10.909635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.118 [2024-12-05 12:18:10.909642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.118 [2024-12-05 12:18:10.909649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.118 [2024-12-05 12:18:10.909665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.118 qpair failed and we were unable to recover it. 
00:34:46.118 [2024-12-05 12:18:10.919553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.118 [2024-12-05 12:18:10.919630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.118 [2024-12-05 12:18:10.919648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.118 [2024-12-05 12:18:10.919655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.118 [2024-12-05 12:18:10.919662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.118 [2024-12-05 12:18:10.919678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.118 qpair failed and we were unable to recover it. 
00:34:46.118 [2024-12-05 12:18:10.929452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.118 [2024-12-05 12:18:10.929526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.118 [2024-12-05 12:18:10.929546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.118 [2024-12-05 12:18:10.929555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.118 [2024-12-05 12:18:10.929562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.118 [2024-12-05 12:18:10.929579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.118 qpair failed and we were unable to recover it. 
00:34:46.119 [2024-12-05 12:18:10.939646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.119 [2024-12-05 12:18:10.939716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.119 [2024-12-05 12:18:10.939733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.119 [2024-12-05 12:18:10.939741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.119 [2024-12-05 12:18:10.939748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.119 [2024-12-05 12:18:10.939764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.119 qpair failed and we were unable to recover it. 
00:34:46.119 [2024-12-05 12:18:10.949719] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.119 [2024-12-05 12:18:10.949795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.119 [2024-12-05 12:18:10.949813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.119 [2024-12-05 12:18:10.949821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.119 [2024-12-05 12:18:10.949827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.119 [2024-12-05 12:18:10.949844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.119 qpair failed and we were unable to recover it. 
00:34:46.119 [2024-12-05 12:18:10.959699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.119 [2024-12-05 12:18:10.959765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.119 [2024-12-05 12:18:10.959783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.119 [2024-12-05 12:18:10.959791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.119 [2024-12-05 12:18:10.959798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.119 [2024-12-05 12:18:10.959815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.119 qpair failed and we were unable to recover it. 
00:34:46.119 [2024-12-05 12:18:10.969722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.119 [2024-12-05 12:18:10.969788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.119 [2024-12-05 12:18:10.969804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.119 [2024-12-05 12:18:10.969812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.119 [2024-12-05 12:18:10.969818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.119 [2024-12-05 12:18:10.969834] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.119 qpair failed and we were unable to recover it. 
00:34:46.119 [2024-12-05 12:18:10.979811] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.119 [2024-12-05 12:18:10.979899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.119 [2024-12-05 12:18:10.979916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.119 [2024-12-05 12:18:10.979923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.119 [2024-12-05 12:18:10.979930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.119 [2024-12-05 12:18:10.979946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.119 qpair failed and we were unable to recover it. 
00:34:46.119 [2024-12-05 12:18:10.989793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.119 [2024-12-05 12:18:10.989880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.119 [2024-12-05 12:18:10.989907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.119 [2024-12-05 12:18:10.989921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.119 [2024-12-05 12:18:10.989929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.119 [2024-12-05 12:18:10.989947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.119 qpair failed and we were unable to recover it. 
00:34:46.119 [2024-12-05 12:18:10.999797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.119 [2024-12-05 12:18:10.999858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.119 [2024-12-05 12:18:10.999878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.119 [2024-12-05 12:18:10.999885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.119 [2024-12-05 12:18:10.999892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.119 [2024-12-05 12:18:10.999909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.119 qpair failed and we were unable to recover it. 
00:34:46.119 [2024-12-05 12:18:11.009831] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.119 [2024-12-05 12:18:11.009900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.119 [2024-12-05 12:18:11.009917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.119 [2024-12-05 12:18:11.009924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.119 [2024-12-05 12:18:11.009931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.119 [2024-12-05 12:18:11.009947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.119 qpair failed and we were unable to recover it. 
00:34:46.119 [2024-12-05 12:18:11.019877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.119 [2024-12-05 12:18:11.019944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.119 [2024-12-05 12:18:11.019961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.119 [2024-12-05 12:18:11.019969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.119 [2024-12-05 12:18:11.019975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.119 [2024-12-05 12:18:11.019991] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.119 qpair failed and we were unable to recover it. 
00:34:46.119 [2024-12-05 12:18:11.029984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.119 [2024-12-05 12:18:11.030057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.119 [2024-12-05 12:18:11.030074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.119 [2024-12-05 12:18:11.030081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.119 [2024-12-05 12:18:11.030094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.119 [2024-12-05 12:18:11.030110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.119 qpair failed and we were unable to recover it. 
00:34:46.119 [2024-12-05 12:18:11.039943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.119 [2024-12-05 12:18:11.040006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.119 [2024-12-05 12:18:11.040024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.119 [2024-12-05 12:18:11.040032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.119 [2024-12-05 12:18:11.040038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.119 [2024-12-05 12:18:11.040054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.119 qpair failed and we were unable to recover it. 
00:34:46.119 [2024-12-05 12:18:11.049951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.119 [2024-12-05 12:18:11.050014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.119 [2024-12-05 12:18:11.050032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.119 [2024-12-05 12:18:11.050039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.119 [2024-12-05 12:18:11.050046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.119 [2024-12-05 12:18:11.050061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.119 qpair failed and we were unable to recover it. 
00:34:46.119 [2024-12-05 12:18:11.060007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.119 [2024-12-05 12:18:11.060106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.119 [2024-12-05 12:18:11.060123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.119 [2024-12-05 12:18:11.060130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.119 [2024-12-05 12:18:11.060137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.119 [2024-12-05 12:18:11.060153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.119 qpair failed and we were unable to recover it. 
00:34:46.120 [2024-12-05 12:18:11.070052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.120 [2024-12-05 12:18:11.070129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.120 [2024-12-05 12:18:11.070168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.120 [2024-12-05 12:18:11.070177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.120 [2024-12-05 12:18:11.070185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.120 [2024-12-05 12:18:11.070209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.120 qpair failed and we were unable to recover it. 
00:34:46.120 [2024-12-05 12:18:11.080020] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.120 [2024-12-05 12:18:11.080080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.120 [2024-12-05 12:18:11.080101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.120 [2024-12-05 12:18:11.080108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.120 [2024-12-05 12:18:11.080115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.120 [2024-12-05 12:18:11.080134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.120 qpair failed and we were unable to recover it. 
00:34:46.120 [2024-12-05 12:18:11.090064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.120 [2024-12-05 12:18:11.090143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.120 [2024-12-05 12:18:11.090161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.120 [2024-12-05 12:18:11.090169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.120 [2024-12-05 12:18:11.090175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.120 [2024-12-05 12:18:11.090192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.120 qpair failed and we were unable to recover it. 
00:34:46.120 [2024-12-05 12:18:11.100133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.120 [2024-12-05 12:18:11.100200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.120 [2024-12-05 12:18:11.100218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.120 [2024-12-05 12:18:11.100225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.120 [2024-12-05 12:18:11.100232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.120 [2024-12-05 12:18:11.100248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.120 qpair failed and we were unable to recover it. 
00:34:46.120 [2024-12-05 12:18:11.110196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.120 [2024-12-05 12:18:11.110277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.120 [2024-12-05 12:18:11.110294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.120 [2024-12-05 12:18:11.110302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.120 [2024-12-05 12:18:11.110308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.120 [2024-12-05 12:18:11.110324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.120 qpair failed and we were unable to recover it. 
00:34:46.120 [2024-12-05 12:18:11.120211] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.120 [2024-12-05 12:18:11.120273] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.120 [2024-12-05 12:18:11.120297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.120 [2024-12-05 12:18:11.120305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.120 [2024-12-05 12:18:11.120313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.120 [2024-12-05 12:18:11.120330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.120 qpair failed and we were unable to recover it. 
00:34:46.120 [2024-12-05 12:18:11.130153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.120 [2024-12-05 12:18:11.130230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.120 [2024-12-05 12:18:11.130248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.120 [2024-12-05 12:18:11.130255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.120 [2024-12-05 12:18:11.130261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.120 [2024-12-05 12:18:11.130277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.120 qpair failed and we were unable to recover it. 
00:34:46.120 [2024-12-05 12:18:11.140260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.120 [2024-12-05 12:18:11.140332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.120 [2024-12-05 12:18:11.140351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.120 [2024-12-05 12:18:11.140359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.120 [2024-12-05 12:18:11.140366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.120 [2024-12-05 12:18:11.140383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.120 qpair failed and we were unable to recover it. 
00:34:46.120 [2024-12-05 12:18:11.150303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.120 [2024-12-05 12:18:11.150420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.120 [2024-12-05 12:18:11.150438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.120 [2024-12-05 12:18:11.150445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.120 [2024-12-05 12:18:11.150451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.120 [2024-12-05 12:18:11.150478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.120 qpair failed and we were unable to recover it. 
00:34:46.120 [2024-12-05 12:18:11.160349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.120 [2024-12-05 12:18:11.160461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.120 [2024-12-05 12:18:11.160478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.120 [2024-12-05 12:18:11.160485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.120 [2024-12-05 12:18:11.160498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.120 [2024-12-05 12:18:11.160515] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.120 qpair failed and we were unable to recover it. 
00:34:46.383 [2024-12-05 12:18:11.170326] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.383 [2024-12-05 12:18:11.170391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.383 [2024-12-05 12:18:11.170408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.383 [2024-12-05 12:18:11.170416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.383 [2024-12-05 12:18:11.170422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.383 [2024-12-05 12:18:11.170437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.383 qpair failed and we were unable to recover it. 
00:34:46.383 [2024-12-05 12:18:11.180375] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.383 [2024-12-05 12:18:11.180445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.383 [2024-12-05 12:18:11.180469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.383 [2024-12-05 12:18:11.180479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.383 [2024-12-05 12:18:11.180487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.383 [2024-12-05 12:18:11.180504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.383 qpair failed and we were unable to recover it. 
00:34:46.383 [2024-12-05 12:18:11.190480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.383 [2024-12-05 12:18:11.190557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.383 [2024-12-05 12:18:11.190574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.383 [2024-12-05 12:18:11.190582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.383 [2024-12-05 12:18:11.190589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.383 [2024-12-05 12:18:11.190606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.383 qpair failed and we were unable to recover it. 
00:34:46.383 [2024-12-05 12:18:11.200414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.383 [2024-12-05 12:18:11.200485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.383 [2024-12-05 12:18:11.200504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.383 [2024-12-05 12:18:11.200512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.383 [2024-12-05 12:18:11.200519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.383 [2024-12-05 12:18:11.200535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.383 qpair failed and we were unable to recover it. 
00:34:46.383 [2024-12-05 12:18:11.210438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.383 [2024-12-05 12:18:11.210504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.383 [2024-12-05 12:18:11.210525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.383 [2024-12-05 12:18:11.210533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.383 [2024-12-05 12:18:11.210540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.383 [2024-12-05 12:18:11.210557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.383 qpair failed and we were unable to recover it. 
00:34:46.383 [2024-12-05 12:18:11.220497] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.383 [2024-12-05 12:18:11.220563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.383 [2024-12-05 12:18:11.220580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.383 [2024-12-05 12:18:11.220588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.383 [2024-12-05 12:18:11.220594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.383 [2024-12-05 12:18:11.220610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.383 qpair failed and we were unable to recover it. 
00:34:46.383 [2024-12-05 12:18:11.230465] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.383 [2024-12-05 12:18:11.230540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.383 [2024-12-05 12:18:11.230557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.383 [2024-12-05 12:18:11.230564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.383 [2024-12-05 12:18:11.230571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.383 [2024-12-05 12:18:11.230586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.383 qpair failed and we were unable to recover it. 
00:34:46.383 [2024-12-05 12:18:11.240544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.383 [2024-12-05 12:18:11.240612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.383 [2024-12-05 12:18:11.240629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.383 [2024-12-05 12:18:11.240636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.383 [2024-12-05 12:18:11.240643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.383 [2024-12-05 12:18:11.240658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.383 qpair failed and we were unable to recover it. 
00:34:46.383 [2024-12-05 12:18:11.250598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.383 [2024-12-05 12:18:11.250664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.383 [2024-12-05 12:18:11.250691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.383 [2024-12-05 12:18:11.250699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.383 [2024-12-05 12:18:11.250705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.383 [2024-12-05 12:18:11.250721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.383 qpair failed and we were unable to recover it. 
00:34:46.383 [2024-12-05 12:18:11.260607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.383 [2024-12-05 12:18:11.260674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.383 [2024-12-05 12:18:11.260691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.383 [2024-12-05 12:18:11.260699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.383 [2024-12-05 12:18:11.260705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.383 [2024-12-05 12:18:11.260721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.383 qpair failed and we were unable to recover it. 
00:34:46.383 [2024-12-05 12:18:11.270704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.383 [2024-12-05 12:18:11.270815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.383 [2024-12-05 12:18:11.270833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.383 [2024-12-05 12:18:11.270840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.383 [2024-12-05 12:18:11.270847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.383 [2024-12-05 12:18:11.270863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.383 qpair failed and we were unable to recover it. 
00:34:46.383 [2024-12-05 12:18:11.280670] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.384 [2024-12-05 12:18:11.280727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.384 [2024-12-05 12:18:11.280744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.384 [2024-12-05 12:18:11.280751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.384 [2024-12-05 12:18:11.280757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.384 [2024-12-05 12:18:11.280774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.384 qpair failed and we were unable to recover it. 
00:34:46.384 [2024-12-05 12:18:11.290705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.384 [2024-12-05 12:18:11.290814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.384 [2024-12-05 12:18:11.290830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.384 [2024-12-05 12:18:11.290838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.384 [2024-12-05 12:18:11.290850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.384 [2024-12-05 12:18:11.290867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.384 qpair failed and we were unable to recover it. 
00:34:46.384 [2024-12-05 12:18:11.300752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.384 [2024-12-05 12:18:11.300822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.384 [2024-12-05 12:18:11.300839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.384 [2024-12-05 12:18:11.300848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.384 [2024-12-05 12:18:11.300854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.384 [2024-12-05 12:18:11.300869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.384 qpair failed and we were unable to recover it. 
00:34:46.384 [2024-12-05 12:18:11.310805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.384 [2024-12-05 12:18:11.310879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.384 [2024-12-05 12:18:11.310897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.384 [2024-12-05 12:18:11.310904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.384 [2024-12-05 12:18:11.310911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.384 [2024-12-05 12:18:11.310927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.384 qpair failed and we were unable to recover it. 
00:34:46.384 [2024-12-05 12:18:11.320806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.384 [2024-12-05 12:18:11.320898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.384 [2024-12-05 12:18:11.320917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.384 [2024-12-05 12:18:11.320924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.384 [2024-12-05 12:18:11.320931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.384 [2024-12-05 12:18:11.320947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.384 qpair failed and we were unable to recover it. 
00:34:46.384 [2024-12-05 12:18:11.330821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.384 [2024-12-05 12:18:11.330890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.384 [2024-12-05 12:18:11.330906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.384 [2024-12-05 12:18:11.330914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.384 [2024-12-05 12:18:11.330920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.384 [2024-12-05 12:18:11.330935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.384 qpair failed and we were unable to recover it. 
00:34:46.384 [2024-12-05 12:18:11.340847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.384 [2024-12-05 12:18:11.340913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.384 [2024-12-05 12:18:11.340930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.384 [2024-12-05 12:18:11.340937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.384 [2024-12-05 12:18:11.340944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.384 [2024-12-05 12:18:11.340959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.384 qpair failed and we were unable to recover it. 
00:34:46.384 [2024-12-05 12:18:11.350899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.384 [2024-12-05 12:18:11.350968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.384 [2024-12-05 12:18:11.350985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.384 [2024-12-05 12:18:11.350992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.384 [2024-12-05 12:18:11.350999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.384 [2024-12-05 12:18:11.351015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.384 qpair failed and we were unable to recover it. 
00:34:46.384 [2024-12-05 12:18:11.360896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.384 [2024-12-05 12:18:11.360965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.384 [2024-12-05 12:18:11.360983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.384 [2024-12-05 12:18:11.360990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.384 [2024-12-05 12:18:11.360997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.384 [2024-12-05 12:18:11.361013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.384 qpair failed and we were unable to recover it. 
00:34:46.384 [2024-12-05 12:18:11.370981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.384 [2024-12-05 12:18:11.371047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.384 [2024-12-05 12:18:11.371064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.384 [2024-12-05 12:18:11.371071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.384 [2024-12-05 12:18:11.371078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.384 [2024-12-05 12:18:11.371094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.384 qpair failed and we were unable to recover it. 
00:34:46.384 [2024-12-05 12:18:11.380847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.384 [2024-12-05 12:18:11.380917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.384 [2024-12-05 12:18:11.380939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.384 [2024-12-05 12:18:11.380946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.384 [2024-12-05 12:18:11.380953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.384 [2024-12-05 12:18:11.380969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.384 qpair failed and we were unable to recover it. 
00:34:46.384 [2024-12-05 12:18:11.391067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.384 [2024-12-05 12:18:11.391181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.384 [2024-12-05 12:18:11.391198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.384 [2024-12-05 12:18:11.391205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.384 [2024-12-05 12:18:11.391212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.384 [2024-12-05 12:18:11.391227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.384 qpair failed and we were unable to recover it. 
00:34:46.384 [2024-12-05 12:18:11.401023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.384 [2024-12-05 12:18:11.401087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.384 [2024-12-05 12:18:11.401104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.384 [2024-12-05 12:18:11.401111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.384 [2024-12-05 12:18:11.401118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.384 [2024-12-05 12:18:11.401133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.384 qpair failed and we were unable to recover it. 
00:34:46.384 [2024-12-05 12:18:11.411038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.384 [2024-12-05 12:18:11.411109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.385 [2024-12-05 12:18:11.411126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.385 [2024-12-05 12:18:11.411134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.385 [2024-12-05 12:18:11.411140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.385 [2024-12-05 12:18:11.411156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.385 qpair failed and we were unable to recover it. 
00:34:46.385 [2024-12-05 12:18:11.420983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.385 [2024-12-05 12:18:11.421051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.385 [2024-12-05 12:18:11.421068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.385 [2024-12-05 12:18:11.421075] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.385 [2024-12-05 12:18:11.421087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.385 [2024-12-05 12:18:11.421103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.385 qpair failed and we were unable to recover it. 
00:34:46.649 [2024-12-05 12:18:11.431162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.649 [2024-12-05 12:18:11.431239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.649 [2024-12-05 12:18:11.431257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.649 [2024-12-05 12:18:11.431264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.649 [2024-12-05 12:18:11.431271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.649 [2024-12-05 12:18:11.431286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.649 qpair failed and we were unable to recover it. 
00:34:46.649 [2024-12-05 12:18:11.441045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.649 [2024-12-05 12:18:11.441120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.649 [2024-12-05 12:18:11.441158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.649 [2024-12-05 12:18:11.441167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.649 [2024-12-05 12:18:11.441175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.649 [2024-12-05 12:18:11.441199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.649 qpair failed and we were unable to recover it. 
00:34:46.649 [2024-12-05 12:18:11.451159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.649 [2024-12-05 12:18:11.451225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.649 [2024-12-05 12:18:11.451248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.649 [2024-12-05 12:18:11.451260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.649 [2024-12-05 12:18:11.451267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.649 [2024-12-05 12:18:11.451287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.649 qpair failed and we were unable to recover it. 
00:34:46.649 [2024-12-05 12:18:11.461199] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.649 [2024-12-05 12:18:11.461274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.649 [2024-12-05 12:18:11.461302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.649 [2024-12-05 12:18:11.461310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.649 [2024-12-05 12:18:11.461317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.649 [2024-12-05 12:18:11.461337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.649 qpair failed and we were unable to recover it. 
00:34:46.649 [2024-12-05 12:18:11.471163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.649 [2024-12-05 12:18:11.471237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.649 [2024-12-05 12:18:11.471256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.649 [2024-12-05 12:18:11.471263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.649 [2024-12-05 12:18:11.471270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.649 [2024-12-05 12:18:11.471287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.649 qpair failed and we were unable to recover it. 
00:34:46.649 [2024-12-05 12:18:11.481318] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.649 [2024-12-05 12:18:11.481380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.649 [2024-12-05 12:18:11.481397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.649 [2024-12-05 12:18:11.481404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.649 [2024-12-05 12:18:11.481410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.649 [2024-12-05 12:18:11.481427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.649 qpair failed and we were unable to recover it. 
00:34:46.649 [2024-12-05 12:18:11.491275] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.649 [2024-12-05 12:18:11.491342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.649 [2024-12-05 12:18:11.491360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.649 [2024-12-05 12:18:11.491367] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.649 [2024-12-05 12:18:11.491373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.649 [2024-12-05 12:18:11.491390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.649 qpair failed and we were unable to recover it. 
00:34:46.649 [2024-12-05 12:18:11.501351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.649 [2024-12-05 12:18:11.501420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.649 [2024-12-05 12:18:11.501438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.649 [2024-12-05 12:18:11.501445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.649 [2024-12-05 12:18:11.501451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.649 [2024-12-05 12:18:11.501473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.649 qpair failed and we were unable to recover it. 
00:34:46.649 [2024-12-05 12:18:11.511264] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.649 [2024-12-05 12:18:11.511335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.649 [2024-12-05 12:18:11.511358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.649 [2024-12-05 12:18:11.511365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.649 [2024-12-05 12:18:11.511371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.649 [2024-12-05 12:18:11.511388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.649 qpair failed and we were unable to recover it. 
00:34:46.649 [2024-12-05 12:18:11.521403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.649 [2024-12-05 12:18:11.521474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.649 [2024-12-05 12:18:11.521494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.649 [2024-12-05 12:18:11.521502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.649 [2024-12-05 12:18:11.521508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.649 [2024-12-05 12:18:11.521525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.649 qpair failed and we were unable to recover it. 
00:34:46.649 [2024-12-05 12:18:11.531435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.650 [2024-12-05 12:18:11.531505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.650 [2024-12-05 12:18:11.531522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.650 [2024-12-05 12:18:11.531530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.650 [2024-12-05 12:18:11.531536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.650 [2024-12-05 12:18:11.531553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.650 qpair failed and we were unable to recover it. 
00:34:46.650 [2024-12-05 12:18:11.541401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.650 [2024-12-05 12:18:11.541476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.650 [2024-12-05 12:18:11.541494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.650 [2024-12-05 12:18:11.541501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.650 [2024-12-05 12:18:11.541508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.650 [2024-12-05 12:18:11.541523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.650 qpair failed and we were unable to recover it. 
00:34:46.650 [2024-12-05 12:18:11.551461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.650 [2024-12-05 12:18:11.551530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.650 [2024-12-05 12:18:11.551547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.650 [2024-12-05 12:18:11.551555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.650 [2024-12-05 12:18:11.551567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.650 [2024-12-05 12:18:11.551583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.650 qpair failed and we were unable to recover it. 
00:34:46.650 [2024-12-05 12:18:11.561389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.650 [2024-12-05 12:18:11.561448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.650 [2024-12-05 12:18:11.561471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.650 [2024-12-05 12:18:11.561478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.650 [2024-12-05 12:18:11.561485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.650 [2024-12-05 12:18:11.561501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.650 qpair failed and we were unable to recover it. 
00:34:46.650 [2024-12-05 12:18:11.571431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.650 [2024-12-05 12:18:11.571501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.650 [2024-12-05 12:18:11.571518] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.650 [2024-12-05 12:18:11.571526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.650 [2024-12-05 12:18:11.571532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.650 [2024-12-05 12:18:11.571549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.650 qpair failed and we were unable to recover it. 
00:34:46.650 [2024-12-05 12:18:11.581620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.650 [2024-12-05 12:18:11.581689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.650 [2024-12-05 12:18:11.581706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.650 [2024-12-05 12:18:11.581714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.650 [2024-12-05 12:18:11.581720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.650 [2024-12-05 12:18:11.581737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.650 qpair failed and we were unable to recover it. 
00:34:46.650 [2024-12-05 12:18:11.591647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.650 [2024-12-05 12:18:11.591728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.650 [2024-12-05 12:18:11.591746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.650 [2024-12-05 12:18:11.591753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.650 [2024-12-05 12:18:11.591760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.650 [2024-12-05 12:18:11.591776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.650 qpair failed and we were unable to recover it. 
00:34:46.650 [2024-12-05 12:18:11.601527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.650 [2024-12-05 12:18:11.601592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.650 [2024-12-05 12:18:11.601610] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.650 [2024-12-05 12:18:11.601617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.650 [2024-12-05 12:18:11.601624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.650 [2024-12-05 12:18:11.601640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.650 qpair failed and we were unable to recover it. 
00:34:46.650 [2024-12-05 12:18:11.611695] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.650 [2024-12-05 12:18:11.611761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.650 [2024-12-05 12:18:11.611783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.650 [2024-12-05 12:18:11.611791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.650 [2024-12-05 12:18:11.611798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.650 [2024-12-05 12:18:11.611815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.650 qpair failed and we were unable to recover it. 
00:34:46.650 [2024-12-05 12:18:11.621634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.650 [2024-12-05 12:18:11.621702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.650 [2024-12-05 12:18:11.621719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.650 [2024-12-05 12:18:11.621727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.650 [2024-12-05 12:18:11.621734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.650 [2024-12-05 12:18:11.621750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.650 qpair failed and we were unable to recover it. 
00:34:46.650 [2024-12-05 12:18:11.631805] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.650 [2024-12-05 12:18:11.631906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.650 [2024-12-05 12:18:11.631928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.650 [2024-12-05 12:18:11.631936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.650 [2024-12-05 12:18:11.631943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.650 [2024-12-05 12:18:11.631961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.650 qpair failed and we were unable to recover it. 
00:34:46.650 [2024-12-05 12:18:11.641750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.650 [2024-12-05 12:18:11.641814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.650 [2024-12-05 12:18:11.641837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.650 [2024-12-05 12:18:11.641845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.650 [2024-12-05 12:18:11.641852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.650 [2024-12-05 12:18:11.641868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.650 qpair failed and we were unable to recover it. 
00:34:46.651 [2024-12-05 12:18:11.651807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.651 [2024-12-05 12:18:11.651874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.651 [2024-12-05 12:18:11.651894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.651 [2024-12-05 12:18:11.651901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.651 [2024-12-05 12:18:11.651908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.651 [2024-12-05 12:18:11.651926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.651 qpair failed and we were unable to recover it. 
00:34:46.651 [2024-12-05 12:18:11.661787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.651 [2024-12-05 12:18:11.661857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.651 [2024-12-05 12:18:11.661874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.651 [2024-12-05 12:18:11.661881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.651 [2024-12-05 12:18:11.661888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.651 [2024-12-05 12:18:11.661904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.651 qpair failed and we were unable to recover it. 
00:34:46.651 [2024-12-05 12:18:11.671895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.651 [2024-12-05 12:18:11.671963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.651 [2024-12-05 12:18:11.671980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.651 [2024-12-05 12:18:11.671987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.651 [2024-12-05 12:18:11.671993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.651 [2024-12-05 12:18:11.672009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.651 qpair failed and we were unable to recover it. 
00:34:46.651 [2024-12-05 12:18:11.681896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.651 [2024-12-05 12:18:11.681956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.651 [2024-12-05 12:18:11.681974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.651 [2024-12-05 12:18:11.681981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.651 [2024-12-05 12:18:11.681994] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.651 [2024-12-05 12:18:11.682010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.651 qpair failed and we were unable to recover it. 
00:34:46.651 [2024-12-05 12:18:11.691905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.651 [2024-12-05 12:18:11.691968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.651 [2024-12-05 12:18:11.691986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.651 [2024-12-05 12:18:11.691994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.651 [2024-12-05 12:18:11.692000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.651 [2024-12-05 12:18:11.692016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.651 qpair failed and we were unable to recover it. 
00:34:46.915 [2024-12-05 12:18:11.701880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.915 [2024-12-05 12:18:11.701943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.915 [2024-12-05 12:18:11.701960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.915 [2024-12-05 12:18:11.701968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.915 [2024-12-05 12:18:11.701974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.915 [2024-12-05 12:18:11.701990] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.915 qpair failed and we were unable to recover it. 
00:34:46.915 [2024-12-05 12:18:11.712023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.915 [2024-12-05 12:18:11.712105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.915 [2024-12-05 12:18:11.712122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.915 [2024-12-05 12:18:11.712129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.915 [2024-12-05 12:18:11.712135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.915 [2024-12-05 12:18:11.712151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.915 qpair failed and we were unable to recover it. 
00:34:46.915 [2024-12-05 12:18:11.722012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.915 [2024-12-05 12:18:11.722090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.915 [2024-12-05 12:18:11.722108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.915 [2024-12-05 12:18:11.722115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.915 [2024-12-05 12:18:11.722122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.915 [2024-12-05 12:18:11.722138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.915 qpair failed and we were unable to recover it. 
00:34:46.915 [2024-12-05 12:18:11.732053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.915 [2024-12-05 12:18:11.732161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.915 [2024-12-05 12:18:11.732179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.915 [2024-12-05 12:18:11.732186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.915 [2024-12-05 12:18:11.732192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.915 [2024-12-05 12:18:11.732208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.915 qpair failed and we were unable to recover it. 
00:34:46.915 [2024-12-05 12:18:11.742077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.915 [2024-12-05 12:18:11.742191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.915 [2024-12-05 12:18:11.742210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.915 [2024-12-05 12:18:11.742217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.915 [2024-12-05 12:18:11.742224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.915 [2024-12-05 12:18:11.742241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.915 qpair failed and we were unable to recover it. 
00:34:46.915 [2024-12-05 12:18:11.752136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.915 [2024-12-05 12:18:11.752206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.915 [2024-12-05 12:18:11.752224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.915 [2024-12-05 12:18:11.752231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.915 [2024-12-05 12:18:11.752238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.915 [2024-12-05 12:18:11.752254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.915 qpair failed and we were unable to recover it. 
00:34:46.915 [2024-12-05 12:18:11.762026] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.915 [2024-12-05 12:18:11.762136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.915 [2024-12-05 12:18:11.762164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.915 [2024-12-05 12:18:11.762173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.915 [2024-12-05 12:18:11.762180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.915 [2024-12-05 12:18:11.762199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.915 qpair failed and we were unable to recover it. 
00:34:46.915 [2024-12-05 12:18:11.772245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.915 [2024-12-05 12:18:11.772343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.915 [2024-12-05 12:18:11.772368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.915 [2024-12-05 12:18:11.772376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.915 [2024-12-05 12:18:11.772383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.915 [2024-12-05 12:18:11.772401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.915 qpair failed and we were unable to recover it. 
00:34:46.915 [2024-12-05 12:18:11.782178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.915 [2024-12-05 12:18:11.782262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.915 [2024-12-05 12:18:11.782280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.915 [2024-12-05 12:18:11.782287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.915 [2024-12-05 12:18:11.782293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.915 [2024-12-05 12:18:11.782310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.915 qpair failed and we were unable to recover it. 
00:34:46.915 [2024-12-05 12:18:11.792294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.915 [2024-12-05 12:18:11.792373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.915 [2024-12-05 12:18:11.792392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.915 [2024-12-05 12:18:11.792398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.915 [2024-12-05 12:18:11.792405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.915 [2024-12-05 12:18:11.792421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.915 qpair failed and we were unable to recover it. 
00:34:46.915 [2024-12-05 12:18:11.802265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.915 [2024-12-05 12:18:11.802331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.915 [2024-12-05 12:18:11.802349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.916 [2024-12-05 12:18:11.802356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.916 [2024-12-05 12:18:11.802362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.916 [2024-12-05 12:18:11.802379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.916 qpair failed and we were unable to recover it. 
00:34:46.916 [2024-12-05 12:18:11.812191] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.916 [2024-12-05 12:18:11.812255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.916 [2024-12-05 12:18:11.812274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.916 [2024-12-05 12:18:11.812283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.916 [2024-12-05 12:18:11.812296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.916 [2024-12-05 12:18:11.812313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.916 qpair failed and we were unable to recover it. 
00:34:46.916 [2024-12-05 12:18:11.822345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.916 [2024-12-05 12:18:11.822419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.916 [2024-12-05 12:18:11.822438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.916 [2024-12-05 12:18:11.822445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.916 [2024-12-05 12:18:11.822451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.916 [2024-12-05 12:18:11.822474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.916 qpair failed and we were unable to recover it. 
00:34:46.916 [2024-12-05 12:18:11.832389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.916 [2024-12-05 12:18:11.832466] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.916 [2024-12-05 12:18:11.832485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.916 [2024-12-05 12:18:11.832493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.916 [2024-12-05 12:18:11.832499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.916 [2024-12-05 12:18:11.832516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.916 qpair failed and we were unable to recover it. 
00:34:46.916 [2024-12-05 12:18:11.842274] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.916 [2024-12-05 12:18:11.842343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.916 [2024-12-05 12:18:11.842363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.916 [2024-12-05 12:18:11.842370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.916 [2024-12-05 12:18:11.842377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.916 [2024-12-05 12:18:11.842393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.916 qpair failed and we were unable to recover it. 
00:34:46.916 [2024-12-05 12:18:11.852306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.916 [2024-12-05 12:18:11.852380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.916 [2024-12-05 12:18:11.852398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.916 [2024-12-05 12:18:11.852405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.916 [2024-12-05 12:18:11.852412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.916 [2024-12-05 12:18:11.852428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.916 qpair failed and we were unable to recover it. 
00:34:46.916 [2024-12-05 12:18:11.862475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.916 [2024-12-05 12:18:11.862545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.916 [2024-12-05 12:18:11.862562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.916 [2024-12-05 12:18:11.862569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.916 [2024-12-05 12:18:11.862576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.916 [2024-12-05 12:18:11.862592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.916 qpair failed and we were unable to recover it. 
00:34:46.916 [2024-12-05 12:18:11.872522] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.916 [2024-12-05 12:18:11.872592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.916 [2024-12-05 12:18:11.872612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.916 [2024-12-05 12:18:11.872619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.916 [2024-12-05 12:18:11.872626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.916 [2024-12-05 12:18:11.872642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.916 qpair failed and we were unable to recover it. 
00:34:46.916 [2024-12-05 12:18:11.882526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.916 [2024-12-05 12:18:11.882595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.916 [2024-12-05 12:18:11.882612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.916 [2024-12-05 12:18:11.882620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.916 [2024-12-05 12:18:11.882626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.916 [2024-12-05 12:18:11.882643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.916 qpair failed and we were unable to recover it. 
00:34:46.916 [2024-12-05 12:18:11.892546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.916 [2024-12-05 12:18:11.892613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.916 [2024-12-05 12:18:11.892631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.916 [2024-12-05 12:18:11.892638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.916 [2024-12-05 12:18:11.892644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.916 [2024-12-05 12:18:11.892661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.916 qpair failed and we were unable to recover it. 
00:34:46.916 [2024-12-05 12:18:11.902588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.916 [2024-12-05 12:18:11.902655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.916 [2024-12-05 12:18:11.902683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.916 [2024-12-05 12:18:11.902691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.916 [2024-12-05 12:18:11.902697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.916 [2024-12-05 12:18:11.902714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.916 qpair failed and we were unable to recover it. 
00:34:46.916 [2024-12-05 12:18:11.912574] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.916 [2024-12-05 12:18:11.912679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.916 [2024-12-05 12:18:11.912697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.916 [2024-12-05 12:18:11.912704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.916 [2024-12-05 12:18:11.912711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.916 [2024-12-05 12:18:11.912728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.916 qpair failed and we were unable to recover it. 
00:34:46.916 [2024-12-05 12:18:11.922642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.916 [2024-12-05 12:18:11.922702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.916 [2024-12-05 12:18:11.922720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.916 [2024-12-05 12:18:11.922727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.916 [2024-12-05 12:18:11.922734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.916 [2024-12-05 12:18:11.922750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.916 qpair failed and we were unable to recover it. 
00:34:46.916 [2024-12-05 12:18:11.932680] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:46.916 [2024-12-05 12:18:11.932750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:46.916 [2024-12-05 12:18:11.932767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:46.916 [2024-12-05 12:18:11.932775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:46.916 [2024-12-05 12:18:11.932781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:46.916 [2024-12-05 12:18:11.932798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:46.916 qpair failed and we were unable to recover it. 
00:34:46.916 [2024-12-05 12:18:11.942722] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.916 [2024-12-05 12:18:11.942823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.916 [2024-12-05 12:18:11.942841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.916 [2024-12-05 12:18:11.942848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.916 [2024-12-05 12:18:11.942860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:46.916 [2024-12-05 12:18:11.942877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.916 qpair failed and we were unable to recover it.
00:34:46.916 [2024-12-05 12:18:11.952773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:46.916 [2024-12-05 12:18:11.952849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:46.916 [2024-12-05 12:18:11.952866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:46.916 [2024-12-05 12:18:11.952874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:46.916 [2024-12-05 12:18:11.952880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:46.916 [2024-12-05 12:18:11.952896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:46.916 qpair failed and we were unable to recover it.
00:34:47.180 [2024-12-05 12:18:11.962741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.180 [2024-12-05 12:18:11.962808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.180 [2024-12-05 12:18:11.962826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.180 [2024-12-05 12:18:11.962833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.180 [2024-12-05 12:18:11.962839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.180 [2024-12-05 12:18:11.962856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.180 qpair failed and we were unable to recover it.
00:34:47.180 [2024-12-05 12:18:11.972806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.180 [2024-12-05 12:18:11.972874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.180 [2024-12-05 12:18:11.972890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.180 [2024-12-05 12:18:11.972898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.180 [2024-12-05 12:18:11.972904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.180 [2024-12-05 12:18:11.972921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.180 qpair failed and we were unable to recover it.
00:34:47.180 [2024-12-05 12:18:11.982854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.180 [2024-12-05 12:18:11.982922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.180 [2024-12-05 12:18:11.982940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.180 [2024-12-05 12:18:11.982948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.180 [2024-12-05 12:18:11.982954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.180 [2024-12-05 12:18:11.982970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.180 qpair failed and we were unable to recover it.
00:34:47.180 [2024-12-05 12:18:11.992916] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.180 [2024-12-05 12:18:11.992978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.180 [2024-12-05 12:18:11.992997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.180 [2024-12-05 12:18:11.993004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.180 [2024-12-05 12:18:11.993011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.180 [2024-12-05 12:18:11.993027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.180 qpair failed and we were unable to recover it.
00:34:47.180 [2024-12-05 12:18:12.002905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.180 [2024-12-05 12:18:12.002969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.180 [2024-12-05 12:18:12.002987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.180 [2024-12-05 12:18:12.002994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.180 [2024-12-05 12:18:12.003000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.180 [2024-12-05 12:18:12.003016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.180 qpair failed and we were unable to recover it.
00:34:47.180 [2024-12-05 12:18:12.012914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.180 [2024-12-05 12:18:12.012995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.180 [2024-12-05 12:18:12.013012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.180 [2024-12-05 12:18:12.013019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.180 [2024-12-05 12:18:12.013026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.180 [2024-12-05 12:18:12.013042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.180 qpair failed and we were unable to recover it.
00:34:47.180 [2024-12-05 12:18:12.022970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.180 [2024-12-05 12:18:12.023038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.180 [2024-12-05 12:18:12.023055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.180 [2024-12-05 12:18:12.023063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.180 [2024-12-05 12:18:12.023069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.180 [2024-12-05 12:18:12.023084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.180 qpair failed and we were unable to recover it.
00:34:47.180 [2024-12-05 12:18:12.033051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.180 [2024-12-05 12:18:12.033123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.180 [2024-12-05 12:18:12.033146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.180 [2024-12-05 12:18:12.033153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.180 [2024-12-05 12:18:12.033160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.180 [2024-12-05 12:18:12.033176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.180 qpair failed and we were unable to recover it.
00:34:47.180 [2024-12-05 12:18:12.043038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.180 [2024-12-05 12:18:12.043126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.180 [2024-12-05 12:18:12.043165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.181 [2024-12-05 12:18:12.043174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.181 [2024-12-05 12:18:12.043182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.181 [2024-12-05 12:18:12.043206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.181 qpair failed and we were unable to recover it.
00:34:47.181 [2024-12-05 12:18:12.053118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.181 [2024-12-05 12:18:12.053204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.181 [2024-12-05 12:18:12.053225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.181 [2024-12-05 12:18:12.053233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.181 [2024-12-05 12:18:12.053240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.181 [2024-12-05 12:18:12.053259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.181 qpair failed and we were unable to recover it.
00:34:47.181 [2024-12-05 12:18:12.063064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.181 [2024-12-05 12:18:12.063139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.181 [2024-12-05 12:18:12.063177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.181 [2024-12-05 12:18:12.063186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.181 [2024-12-05 12:18:12.063193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.181 [2024-12-05 12:18:12.063218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.181 qpair failed and we were unable to recover it.
00:34:47.181 [2024-12-05 12:18:12.073137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.181 [2024-12-05 12:18:12.073218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.181 [2024-12-05 12:18:12.073256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.181 [2024-12-05 12:18:12.073265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.181 [2024-12-05 12:18:12.073280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.181 [2024-12-05 12:18:12.073305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.181 qpair failed and we were unable to recover it.
00:34:47.181 [2024-12-05 12:18:12.083145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.181 [2024-12-05 12:18:12.083210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.181 [2024-12-05 12:18:12.083231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.181 [2024-12-05 12:18:12.083239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.181 [2024-12-05 12:18:12.083246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.181 [2024-12-05 12:18:12.083264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.181 qpair failed and we were unable to recover it.
00:34:47.181 [2024-12-05 12:18:12.093208] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.181 [2024-12-05 12:18:12.093274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.181 [2024-12-05 12:18:12.093295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.181 [2024-12-05 12:18:12.093302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.181 [2024-12-05 12:18:12.093309] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.181 [2024-12-05 12:18:12.093326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.181 qpair failed and we were unable to recover it.
00:34:47.181 [2024-12-05 12:18:12.103234] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.181 [2024-12-05 12:18:12.103301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.181 [2024-12-05 12:18:12.103319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.181 [2024-12-05 12:18:12.103327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.181 [2024-12-05 12:18:12.103334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.181 [2024-12-05 12:18:12.103350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.181 qpair failed and we were unable to recover it.
00:34:47.181 [2024-12-05 12:18:12.113145] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.181 [2024-12-05 12:18:12.113213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.181 [2024-12-05 12:18:12.113231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.181 [2024-12-05 12:18:12.113239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.181 [2024-12-05 12:18:12.113245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.181 [2024-12-05 12:18:12.113262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.181 qpair failed and we were unable to recover it.
00:34:47.181 [2024-12-05 12:18:12.123185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.181 [2024-12-05 12:18:12.123247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.181 [2024-12-05 12:18:12.123267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.181 [2024-12-05 12:18:12.123274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.181 [2024-12-05 12:18:12.123281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.181 [2024-12-05 12:18:12.123298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.181 qpair failed and we were unable to recover it.
00:34:47.181 [2024-12-05 12:18:12.133316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.181 [2024-12-05 12:18:12.133377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.181 [2024-12-05 12:18:12.133395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.181 [2024-12-05 12:18:12.133402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.181 [2024-12-05 12:18:12.133408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.181 [2024-12-05 12:18:12.133425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.181 qpair failed and we were unable to recover it.
00:34:47.181 [2024-12-05 12:18:12.143356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.181 [2024-12-05 12:18:12.143423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.181 [2024-12-05 12:18:12.143440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.181 [2024-12-05 12:18:12.143447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.181 [2024-12-05 12:18:12.143460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.181 [2024-12-05 12:18:12.143478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.181 qpair failed and we were unable to recover it.
00:34:47.181 [2024-12-05 12:18:12.153429] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.181 [2024-12-05 12:18:12.153503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.181 [2024-12-05 12:18:12.153521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.181 [2024-12-05 12:18:12.153528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.181 [2024-12-05 12:18:12.153535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.181 [2024-12-05 12:18:12.153551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.181 qpair failed and we were unable to recover it.
00:34:47.181 [2024-12-05 12:18:12.163411] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.181 [2024-12-05 12:18:12.163480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.181 [2024-12-05 12:18:12.163503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.181 [2024-12-05 12:18:12.163510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.181 [2024-12-05 12:18:12.163517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.181 [2024-12-05 12:18:12.163532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.181 qpair failed and we were unable to recover it.
00:34:47.181 [2024-12-05 12:18:12.173312] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.181 [2024-12-05 12:18:12.173376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.181 [2024-12-05 12:18:12.173394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.181 [2024-12-05 12:18:12.173401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.182 [2024-12-05 12:18:12.173408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.182 [2024-12-05 12:18:12.173424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.182 qpair failed and we were unable to recover it.
00:34:47.182 [2024-12-05 12:18:12.183487] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.182 [2024-12-05 12:18:12.183553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.182 [2024-12-05 12:18:12.183572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.182 [2024-12-05 12:18:12.183579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.182 [2024-12-05 12:18:12.183585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.182 [2024-12-05 12:18:12.183602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.182 qpair failed and we were unable to recover it.
00:34:47.182 [2024-12-05 12:18:12.193510] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.182 [2024-12-05 12:18:12.193575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.182 [2024-12-05 12:18:12.193593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.182 [2024-12-05 12:18:12.193601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.182 [2024-12-05 12:18:12.193608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.182 [2024-12-05 12:18:12.193624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.182 qpair failed and we were unable to recover it.
00:34:47.182 [2024-12-05 12:18:12.203413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.182 [2024-12-05 12:18:12.203497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.182 [2024-12-05 12:18:12.203515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.182 [2024-12-05 12:18:12.203522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.182 [2024-12-05 12:18:12.203535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.182 [2024-12-05 12:18:12.203551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.182 qpair failed and we were unable to recover it.
00:34:47.182 [2024-12-05 12:18:12.213538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.182 [2024-12-05 12:18:12.213600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.182 [2024-12-05 12:18:12.213618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.182 [2024-12-05 12:18:12.213625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.182 [2024-12-05 12:18:12.213632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.182 [2024-12-05 12:18:12.213647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.182 qpair failed and we were unable to recover it.
00:34:47.182 [2024-12-05 12:18:12.223567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.182 [2024-12-05 12:18:12.223634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.182 [2024-12-05 12:18:12.223652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.182 [2024-12-05 12:18:12.223659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.182 [2024-12-05 12:18:12.223666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.182 [2024-12-05 12:18:12.223681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.182 qpair failed and we were unable to recover it.
00:34:47.445 [2024-12-05 12:18:12.233684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.445 [2024-12-05 12:18:12.233761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.445 [2024-12-05 12:18:12.233779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.445 [2024-12-05 12:18:12.233787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.445 [2024-12-05 12:18:12.233793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.445 [2024-12-05 12:18:12.233809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.445 qpair failed and we were unable to recover it.
00:34:47.445 [2024-12-05 12:18:12.243623] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.445 [2024-12-05 12:18:12.243687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.445 [2024-12-05 12:18:12.243705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.445 [2024-12-05 12:18:12.243712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.445 [2024-12-05 12:18:12.243719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.445 [2024-12-05 12:18:12.243735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.445 qpair failed and we were unable to recover it.
00:34:47.445 [2024-12-05 12:18:12.253575] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.445 [2024-12-05 12:18:12.253646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.446 [2024-12-05 12:18:12.253663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.446 [2024-12-05 12:18:12.253671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.446 [2024-12-05 12:18:12.253677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.446 [2024-12-05 12:18:12.253693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.446 qpair failed and we were unable to recover it.
00:34:47.446 [2024-12-05 12:18:12.263602] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.446 [2024-12-05 12:18:12.263709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.446 [2024-12-05 12:18:12.263727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.446 [2024-12-05 12:18:12.263735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.446 [2024-12-05 12:18:12.263742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.446 [2024-12-05 12:18:12.263758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.446 qpair failed and we were unable to recover it.
00:34:47.446 [2024-12-05 12:18:12.273783] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.446 [2024-12-05 12:18:12.273848] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.446 [2024-12-05 12:18:12.273866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.446 [2024-12-05 12:18:12.273873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.446 [2024-12-05 12:18:12.273880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.446 [2024-12-05 12:18:12.273896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.446 qpair failed and we were unable to recover it.
00:34:47.446 [2024-12-05 12:18:12.283772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.446 [2024-12-05 12:18:12.283835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.446 [2024-12-05 12:18:12.283851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.446 [2024-12-05 12:18:12.283859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.446 [2024-12-05 12:18:12.283866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.446 [2024-12-05 12:18:12.283882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.446 qpair failed and we were unable to recover it.
00:34:47.446 [2024-12-05 12:18:12.293827] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.446 [2024-12-05 12:18:12.293938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.446 [2024-12-05 12:18:12.293969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.446 [2024-12-05 12:18:12.293978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.446 [2024-12-05 12:18:12.293985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.446 [2024-12-05 12:18:12.294004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.446 qpair failed and we were unable to recover it. 
00:34:47.446 [2024-12-05 12:18:12.303834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.446 [2024-12-05 12:18:12.303901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.446 [2024-12-05 12:18:12.303922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.446 [2024-12-05 12:18:12.303929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.446 [2024-12-05 12:18:12.303936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.446 [2024-12-05 12:18:12.303953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.446 qpair failed and we were unable to recover it. 
00:34:47.446 [2024-12-05 12:18:12.313872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.446 [2024-12-05 12:18:12.313998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.446 [2024-12-05 12:18:12.314017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.446 [2024-12-05 12:18:12.314025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.446 [2024-12-05 12:18:12.314032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.446 [2024-12-05 12:18:12.314048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.446 qpair failed and we were unable to recover it. 
00:34:47.446 [2024-12-05 12:18:12.323757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.446 [2024-12-05 12:18:12.323827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.446 [2024-12-05 12:18:12.323845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.446 [2024-12-05 12:18:12.323852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.446 [2024-12-05 12:18:12.323859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.446 [2024-12-05 12:18:12.323875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.446 qpair failed and we were unable to recover it. 
00:34:47.446 [2024-12-05 12:18:12.333903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.446 [2024-12-05 12:18:12.333963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.446 [2024-12-05 12:18:12.333982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.446 [2024-12-05 12:18:12.333989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.446 [2024-12-05 12:18:12.334001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.446 [2024-12-05 12:18:12.334018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.446 qpair failed and we were unable to recover it. 
00:34:47.446 [2024-12-05 12:18:12.343927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.446 [2024-12-05 12:18:12.343994] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.446 [2024-12-05 12:18:12.344014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.446 [2024-12-05 12:18:12.344021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.446 [2024-12-05 12:18:12.344027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.446 [2024-12-05 12:18:12.344044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.446 qpair failed and we were unable to recover it. 
00:34:47.446 [2024-12-05 12:18:12.354025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.446 [2024-12-05 12:18:12.354103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.446 [2024-12-05 12:18:12.354121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.446 [2024-12-05 12:18:12.354128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.446 [2024-12-05 12:18:12.354136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.446 [2024-12-05 12:18:12.354151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.446 qpair failed and we were unable to recover it. 
00:34:47.446 [2024-12-05 12:18:12.363961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.446 [2024-12-05 12:18:12.364026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.446 [2024-12-05 12:18:12.364044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.446 [2024-12-05 12:18:12.364051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.446 [2024-12-05 12:18:12.364058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.446 [2024-12-05 12:18:12.364074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.446 qpair failed and we were unable to recover it. 
00:34:47.446 [2024-12-05 12:18:12.373990] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.446 [2024-12-05 12:18:12.374049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.446 [2024-12-05 12:18:12.374066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.446 [2024-12-05 12:18:12.374074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.446 [2024-12-05 12:18:12.374080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.446 [2024-12-05 12:18:12.374096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.446 qpair failed and we were unable to recover it. 
00:34:47.446 [2024-12-05 12:18:12.384034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.446 [2024-12-05 12:18:12.384101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.447 [2024-12-05 12:18:12.384118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.447 [2024-12-05 12:18:12.384126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.447 [2024-12-05 12:18:12.384133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.447 [2024-12-05 12:18:12.384149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.447 qpair failed and we were unable to recover it. 
00:34:47.447 [2024-12-05 12:18:12.394118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.447 [2024-12-05 12:18:12.394210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.447 [2024-12-05 12:18:12.394228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.447 [2024-12-05 12:18:12.394235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.447 [2024-12-05 12:18:12.394242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.447 [2024-12-05 12:18:12.394258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.447 qpair failed and we were unable to recover it. 
00:34:47.447 [2024-12-05 12:18:12.404116] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.447 [2024-12-05 12:18:12.404188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.447 [2024-12-05 12:18:12.404207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.447 [2024-12-05 12:18:12.404214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.447 [2024-12-05 12:18:12.404221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.447 [2024-12-05 12:18:12.404238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.447 qpair failed and we were unable to recover it. 
00:34:47.447 [2024-12-05 12:18:12.414098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.447 [2024-12-05 12:18:12.414169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.447 [2024-12-05 12:18:12.414208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.447 [2024-12-05 12:18:12.414218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.447 [2024-12-05 12:18:12.414225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.447 [2024-12-05 12:18:12.414251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.447 qpair failed and we were unable to recover it. 
00:34:47.447 [2024-12-05 12:18:12.424150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.447 [2024-12-05 12:18:12.424218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.447 [2024-12-05 12:18:12.424246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.447 [2024-12-05 12:18:12.424254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.447 [2024-12-05 12:18:12.424261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.447 [2024-12-05 12:18:12.424280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.447 qpair failed and we were unable to recover it. 
00:34:47.447 [2024-12-05 12:18:12.434214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.447 [2024-12-05 12:18:12.434289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.447 [2024-12-05 12:18:12.434309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.447 [2024-12-05 12:18:12.434316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.447 [2024-12-05 12:18:12.434323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.447 [2024-12-05 12:18:12.434340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.447 qpair failed and we were unable to recover it. 
00:34:47.447 [2024-12-05 12:18:12.444229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.447 [2024-12-05 12:18:12.444313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.447 [2024-12-05 12:18:12.444331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.447 [2024-12-05 12:18:12.444339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.447 [2024-12-05 12:18:12.444345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.447 [2024-12-05 12:18:12.444362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.447 qpair failed and we were unable to recover it. 
00:34:47.447 [2024-12-05 12:18:12.454258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.447 [2024-12-05 12:18:12.454331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.447 [2024-12-05 12:18:12.454349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.447 [2024-12-05 12:18:12.454356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.447 [2024-12-05 12:18:12.454363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.447 [2024-12-05 12:18:12.454379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.447 qpair failed and we were unable to recover it. 
00:34:47.447 [2024-12-05 12:18:12.464189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.447 [2024-12-05 12:18:12.464254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.447 [2024-12-05 12:18:12.464273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.447 [2024-12-05 12:18:12.464280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.447 [2024-12-05 12:18:12.464292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.447 [2024-12-05 12:18:12.464309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.447 qpair failed and we were unable to recover it. 
00:34:47.447 [2024-12-05 12:18:12.474350] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.447 [2024-12-05 12:18:12.474421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.447 [2024-12-05 12:18:12.474439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.447 [2024-12-05 12:18:12.474447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.447 [2024-12-05 12:18:12.474458] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.447 [2024-12-05 12:18:12.474476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.447 qpair failed and we were unable to recover it. 
00:34:47.447 [2024-12-05 12:18:12.484202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.447 [2024-12-05 12:18:12.484263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.447 [2024-12-05 12:18:12.484281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.447 [2024-12-05 12:18:12.484288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.447 [2024-12-05 12:18:12.484294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.447 [2024-12-05 12:18:12.484311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.447 qpair failed and we were unable to recover it. 
00:34:47.710 [2024-12-05 12:18:12.494389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.710 [2024-12-05 12:18:12.494486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.710 [2024-12-05 12:18:12.494504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.710 [2024-12-05 12:18:12.494511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.710 [2024-12-05 12:18:12.494518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.710 [2024-12-05 12:18:12.494534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.710 qpair failed and we were unable to recover it. 
00:34:47.710 [2024-12-05 12:18:12.504450] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.710 [2024-12-05 12:18:12.504560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.711 [2024-12-05 12:18:12.504579] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.711 [2024-12-05 12:18:12.504587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.711 [2024-12-05 12:18:12.504593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.711 [2024-12-05 12:18:12.504610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.711 qpair failed and we were unable to recover it. 
00:34:47.711 [2024-12-05 12:18:12.514471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.711 [2024-12-05 12:18:12.514543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.711 [2024-12-05 12:18:12.514562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.711 [2024-12-05 12:18:12.514569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.711 [2024-12-05 12:18:12.514576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.711 [2024-12-05 12:18:12.514592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.711 qpair failed and we were unable to recover it. 
00:34:47.711 [2024-12-05 12:18:12.524446] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.711 [2024-12-05 12:18:12.524521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.711 [2024-12-05 12:18:12.524539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.711 [2024-12-05 12:18:12.524547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.711 [2024-12-05 12:18:12.524553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.711 [2024-12-05 12:18:12.524569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.711 qpair failed and we were unable to recover it. 
00:34:47.711 [2024-12-05 12:18:12.534505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.711 [2024-12-05 12:18:12.534563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.711 [2024-12-05 12:18:12.534581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.711 [2024-12-05 12:18:12.534588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.711 [2024-12-05 12:18:12.534595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.711 [2024-12-05 12:18:12.534612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.711 qpair failed and we were unable to recover it. 
00:34:47.711 [2024-12-05 12:18:12.544534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.711 [2024-12-05 12:18:12.544599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.711 [2024-12-05 12:18:12.544618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.711 [2024-12-05 12:18:12.544626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.711 [2024-12-05 12:18:12.544632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.711 [2024-12-05 12:18:12.544649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.711 qpair failed and we were unable to recover it. 
00:34:47.711 [2024-12-05 12:18:12.554570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.711 [2024-12-05 12:18:12.554639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.711 [2024-12-05 12:18:12.554668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.711 [2024-12-05 12:18:12.554675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.711 [2024-12-05 12:18:12.554682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.711 [2024-12-05 12:18:12.554698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.711 qpair failed and we were unable to recover it. 
00:34:47.711 [2024-12-05 12:18:12.564571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.711 [2024-12-05 12:18:12.564633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.711 [2024-12-05 12:18:12.564650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.711 [2024-12-05 12:18:12.564657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.711 [2024-12-05 12:18:12.564664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.711 [2024-12-05 12:18:12.564680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.711 qpair failed and we were unable to recover it.
00:34:47.711 [2024-12-05 12:18:12.574591] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.711 [2024-12-05 12:18:12.574661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.711 [2024-12-05 12:18:12.574677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.711 [2024-12-05 12:18:12.574684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.711 [2024-12-05 12:18:12.574691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.711 [2024-12-05 12:18:12.574707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.711 qpair failed and we were unable to recover it.
00:34:47.711 [2024-12-05 12:18:12.584658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.711 [2024-12-05 12:18:12.584721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.711 [2024-12-05 12:18:12.584739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.711 [2024-12-05 12:18:12.584746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.711 [2024-12-05 12:18:12.584752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.711 [2024-12-05 12:18:12.584768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.711 qpair failed and we were unable to recover it.
00:34:47.711 [2024-12-05 12:18:12.594660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.711 [2024-12-05 12:18:12.594736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.711 [2024-12-05 12:18:12.594754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.711 [2024-12-05 12:18:12.594761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.711 [2024-12-05 12:18:12.594773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.711 [2024-12-05 12:18:12.594789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.711 qpair failed and we were unable to recover it.
00:34:47.711 [2024-12-05 12:18:12.604669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.711 [2024-12-05 12:18:12.604749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.711 [2024-12-05 12:18:12.604766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.711 [2024-12-05 12:18:12.604773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.711 [2024-12-05 12:18:12.604779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.711 [2024-12-05 12:18:12.604794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.711 qpair failed and we were unable to recover it.
00:34:47.711 [2024-12-05 12:18:12.614732] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.711 [2024-12-05 12:18:12.614810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.711 [2024-12-05 12:18:12.614826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.711 [2024-12-05 12:18:12.614833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.711 [2024-12-05 12:18:12.614839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.711 [2024-12-05 12:18:12.614855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.711 qpair failed and we were unable to recover it.
00:34:47.711 [2024-12-05 12:18:12.624757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.711 [2024-12-05 12:18:12.624868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.711 [2024-12-05 12:18:12.624885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.711 [2024-12-05 12:18:12.624891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.711 [2024-12-05 12:18:12.624898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.711 [2024-12-05 12:18:12.624913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.711 qpair failed and we were unable to recover it.
00:34:47.711 [2024-12-05 12:18:12.634782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.712 [2024-12-05 12:18:12.634845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.712 [2024-12-05 12:18:12.634865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.712 [2024-12-05 12:18:12.634872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.712 [2024-12-05 12:18:12.634878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.712 [2024-12-05 12:18:12.634894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.712 qpair failed and we were unable to recover it.
00:34:47.712 [2024-12-05 12:18:12.644703] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.712 [2024-12-05 12:18:12.644757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.712 [2024-12-05 12:18:12.644773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.712 [2024-12-05 12:18:12.644780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.712 [2024-12-05 12:18:12.644786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.712 [2024-12-05 12:18:12.644801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.712 qpair failed and we were unable to recover it.
00:34:47.712 [2024-12-05 12:18:12.654790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.712 [2024-12-05 12:18:12.654845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.712 [2024-12-05 12:18:12.654861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.712 [2024-12-05 12:18:12.654868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.712 [2024-12-05 12:18:12.654875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.712 [2024-12-05 12:18:12.654890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.712 qpair failed and we were unable to recover it.
00:34:47.712 [2024-12-05 12:18:12.664823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.712 [2024-12-05 12:18:12.664884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.712 [2024-12-05 12:18:12.664899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.712 [2024-12-05 12:18:12.664906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.712 [2024-12-05 12:18:12.664913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.712 [2024-12-05 12:18:12.664927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.712 qpair failed and we were unable to recover it.
00:34:47.712 [2024-12-05 12:18:12.674856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.712 [2024-12-05 12:18:12.674922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.712 [2024-12-05 12:18:12.674937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.712 [2024-12-05 12:18:12.674944] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.712 [2024-12-05 12:18:12.674951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.712 [2024-12-05 12:18:12.674965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.712 qpair failed and we were unable to recover it.
00:34:47.712 [2024-12-05 12:18:12.684826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.712 [2024-12-05 12:18:12.684878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.712 [2024-12-05 12:18:12.684898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.712 [2024-12-05 12:18:12.684905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.712 [2024-12-05 12:18:12.684912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.712 [2024-12-05 12:18:12.684926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.712 qpair failed and we were unable to recover it.
00:34:47.712 [2024-12-05 12:18:12.694887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.712 [2024-12-05 12:18:12.694942] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.712 [2024-12-05 12:18:12.694957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.712 [2024-12-05 12:18:12.694964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.712 [2024-12-05 12:18:12.694970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.712 [2024-12-05 12:18:12.694985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.712 qpair failed and we were unable to recover it.
00:34:47.712 [2024-12-05 12:18:12.704918] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.712 [2024-12-05 12:18:12.704973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.712 [2024-12-05 12:18:12.704988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.712 [2024-12-05 12:18:12.704995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.712 [2024-12-05 12:18:12.705001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.712 [2024-12-05 12:18:12.705015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.712 qpair failed and we were unable to recover it.
00:34:47.712 [2024-12-05 12:18:12.714926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.712 [2024-12-05 12:18:12.714979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.712 [2024-12-05 12:18:12.714993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.712 [2024-12-05 12:18:12.715000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.712 [2024-12-05 12:18:12.715006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.712 [2024-12-05 12:18:12.715020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.712 qpair failed and we were unable to recover it.
00:34:47.712 [2024-12-05 12:18:12.724922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.712 [2024-12-05 12:18:12.725023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.712 [2024-12-05 12:18:12.725036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.712 [2024-12-05 12:18:12.725044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.712 [2024-12-05 12:18:12.725050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.712 [2024-12-05 12:18:12.725068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.712 qpair failed and we were unable to recover it.
00:34:47.712 [2024-12-05 12:18:12.735017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.712 [2024-12-05 12:18:12.735075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.712 [2024-12-05 12:18:12.735089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.712 [2024-12-05 12:18:12.735096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.712 [2024-12-05 12:18:12.735102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.712 [2024-12-05 12:18:12.735115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.712 qpair failed and we were unable to recover it.
00:34:47.712 [2024-12-05 12:18:12.745048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.712 [2024-12-05 12:18:12.745135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.712 [2024-12-05 12:18:12.745149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.712 [2024-12-05 12:18:12.745156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.712 [2024-12-05 12:18:12.745162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.712 [2024-12-05 12:18:12.745176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.712 qpair failed and we were unable to recover it.
00:34:47.712 [2024-12-05 12:18:12.755044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.712 [2024-12-05 12:18:12.755096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.712 [2024-12-05 12:18:12.755110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.712 [2024-12-05 12:18:12.755117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.712 [2024-12-05 12:18:12.755123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.712 [2024-12-05 12:18:12.755136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.712 qpair failed and we were unable to recover it.
00:34:47.975 [2024-12-05 12:18:12.765044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.975 [2024-12-05 12:18:12.765089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.975 [2024-12-05 12:18:12.765103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.975 [2024-12-05 12:18:12.765110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.975 [2024-12-05 12:18:12.765117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.975 [2024-12-05 12:18:12.765130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.975 qpair failed and we were unable to recover it.
00:34:47.975 [2024-12-05 12:18:12.774993] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.975 [2024-12-05 12:18:12.775088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.975 [2024-12-05 12:18:12.775102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.975 [2024-12-05 12:18:12.775109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.975 [2024-12-05 12:18:12.775115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.975 [2024-12-05 12:18:12.775129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.975 qpair failed and we were unable to recover it.
00:34:47.975 [2024-12-05 12:18:12.785140] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.975 [2024-12-05 12:18:12.785199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.975 [2024-12-05 12:18:12.785225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.975 [2024-12-05 12:18:12.785234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.975 [2024-12-05 12:18:12.785241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.975 [2024-12-05 12:18:12.785261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.975 qpair failed and we were unable to recover it.
00:34:47.975 [2024-12-05 12:18:12.795098] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.975 [2024-12-05 12:18:12.795156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.975 [2024-12-05 12:18:12.795182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.975 [2024-12-05 12:18:12.795191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.975 [2024-12-05 12:18:12.795198] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.975 [2024-12-05 12:18:12.795217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.975 qpair failed and we were unable to recover it.
00:34:47.975 [2024-12-05 12:18:12.805123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.975 [2024-12-05 12:18:12.805177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.975 [2024-12-05 12:18:12.805204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.975 [2024-12-05 12:18:12.805212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.975 [2024-12-05 12:18:12.805219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.975 [2024-12-05 12:18:12.805239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.975 qpair failed and we were unable to recover it.
00:34:47.975 [2024-12-05 12:18:12.815220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.975 [2024-12-05 12:18:12.815274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.975 [2024-12-05 12:18:12.815304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.975 [2024-12-05 12:18:12.815313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.975 [2024-12-05 12:18:12.815320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.975 [2024-12-05 12:18:12.815340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.975 qpair failed and we were unable to recover it.
00:34:47.975 [2024-12-05 12:18:12.825290] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.975 [2024-12-05 12:18:12.825346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.975 [2024-12-05 12:18:12.825362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.975 [2024-12-05 12:18:12.825369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.975 [2024-12-05 12:18:12.825375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.975 [2024-12-05 12:18:12.825390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.975 qpair failed and we were unable to recover it.
00:34:47.975 [2024-12-05 12:18:12.835237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.975 [2024-12-05 12:18:12.835290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.976 [2024-12-05 12:18:12.835303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.976 [2024-12-05 12:18:12.835310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.976 [2024-12-05 12:18:12.835317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.976 [2024-12-05 12:18:12.835331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.976 qpair failed and we were unable to recover it.
00:34:47.976 [2024-12-05 12:18:12.845222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.976 [2024-12-05 12:18:12.845270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.976 [2024-12-05 12:18:12.845283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.976 [2024-12-05 12:18:12.845290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.976 [2024-12-05 12:18:12.845297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.976 [2024-12-05 12:18:12.845310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.976 qpair failed and we were unable to recover it.
00:34:47.976 [2024-12-05 12:18:12.855315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.976 [2024-12-05 12:18:12.855418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.976 [2024-12-05 12:18:12.855431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.976 [2024-12-05 12:18:12.855439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.976 [2024-12-05 12:18:12.855445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.976 [2024-12-05 12:18:12.855470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.976 qpair failed and we were unable to recover it.
00:34:47.976 [2024-12-05 12:18:12.865342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.976 [2024-12-05 12:18:12.865397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.976 [2024-12-05 12:18:12.865411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.976 [2024-12-05 12:18:12.865418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.976 [2024-12-05 12:18:12.865424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.976 [2024-12-05 12:18:12.865437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.976 qpair failed and we were unable to recover it.
00:34:47.976 [2024-12-05 12:18:12.875348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.976 [2024-12-05 12:18:12.875399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.976 [2024-12-05 12:18:12.875412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.976 [2024-12-05 12:18:12.875419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.976 [2024-12-05 12:18:12.875426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.976 [2024-12-05 12:18:12.875439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.976 qpair failed and we were unable to recover it.
00:34:47.976 [2024-12-05 12:18:12.885238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.976 [2024-12-05 12:18:12.885285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.976 [2024-12-05 12:18:12.885299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.976 [2024-12-05 12:18:12.885306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.976 [2024-12-05 12:18:12.885312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.976 [2024-12-05 12:18:12.885325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.976 qpair failed and we were unable to recover it.
00:34:47.976 [2024-12-05 12:18:12.895392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.976 [2024-12-05 12:18:12.895441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.976 [2024-12-05 12:18:12.895458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.976 [2024-12-05 12:18:12.895465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.976 [2024-12-05 12:18:12.895472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.976 [2024-12-05 12:18:12.895485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.976 qpair failed and we were unable to recover it.
00:34:47.976 [2024-12-05 12:18:12.905399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:47.976 [2024-12-05 12:18:12.905444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:47.976 [2024-12-05 12:18:12.905462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:47.976 [2024-12-05 12:18:12.905469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:47.976 [2024-12-05 12:18:12.905475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:47.976 [2024-12-05 12:18:12.905489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:47.976 qpair failed and we were unable to recover it.
00:34:47.976 [2024-12-05 12:18:12.915416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.976 [2024-12-05 12:18:12.915476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.976 [2024-12-05 12:18:12.915490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.976 [2024-12-05 12:18:12.915496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.976 [2024-12-05 12:18:12.915503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.976 [2024-12-05 12:18:12.915516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.976 qpair failed and we were unable to recover it. 
00:34:47.976 [2024-12-05 12:18:12.925322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.976 [2024-12-05 12:18:12.925383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.976 [2024-12-05 12:18:12.925396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.976 [2024-12-05 12:18:12.925403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.976 [2024-12-05 12:18:12.925409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.976 [2024-12-05 12:18:12.925423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.976 qpair failed and we were unable to recover it. 
00:34:47.976 [2024-12-05 12:18:12.935523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.976 [2024-12-05 12:18:12.935575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.976 [2024-12-05 12:18:12.935588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.976 [2024-12-05 12:18:12.935596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.976 [2024-12-05 12:18:12.935602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.976 [2024-12-05 12:18:12.935615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.976 qpair failed and we were unable to recover it. 
00:34:47.976 [2024-12-05 12:18:12.945482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.976 [2024-12-05 12:18:12.945529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.976 [2024-12-05 12:18:12.945545] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.976 [2024-12-05 12:18:12.945552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.976 [2024-12-05 12:18:12.945559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.976 [2024-12-05 12:18:12.945572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.976 qpair failed and we were unable to recover it. 
00:34:47.976 [2024-12-05 12:18:12.955585] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.976 [2024-12-05 12:18:12.955637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.976 [2024-12-05 12:18:12.955650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.976 [2024-12-05 12:18:12.955657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.976 [2024-12-05 12:18:12.955663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.976 [2024-12-05 12:18:12.955677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.976 qpair failed and we were unable to recover it. 
00:34:47.976 [2024-12-05 12:18:12.965603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.976 [2024-12-05 12:18:12.965689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.977 [2024-12-05 12:18:12.965702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.977 [2024-12-05 12:18:12.965709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.977 [2024-12-05 12:18:12.965715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.977 [2024-12-05 12:18:12.965729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.977 qpair failed and we were unable to recover it. 
00:34:47.977 [2024-12-05 12:18:12.975607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.977 [2024-12-05 12:18:12.975656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.977 [2024-12-05 12:18:12.975671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.977 [2024-12-05 12:18:12.975678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.977 [2024-12-05 12:18:12.975685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.977 [2024-12-05 12:18:12.975699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.977 qpair failed and we were unable to recover it. 
00:34:47.977 [2024-12-05 12:18:12.985614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.977 [2024-12-05 12:18:12.985661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.977 [2024-12-05 12:18:12.985674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.977 [2024-12-05 12:18:12.985681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.977 [2024-12-05 12:18:12.985687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.977 [2024-12-05 12:18:12.985704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.977 qpair failed and we were unable to recover it. 
00:34:47.977 [2024-12-05 12:18:12.995667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.977 [2024-12-05 12:18:12.995717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.977 [2024-12-05 12:18:12.995730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.977 [2024-12-05 12:18:12.995737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.977 [2024-12-05 12:18:12.995743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.977 [2024-12-05 12:18:12.995757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.977 qpair failed and we were unable to recover it. 
00:34:47.977 [2024-12-05 12:18:13.005646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.977 [2024-12-05 12:18:13.005689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.977 [2024-12-05 12:18:13.005702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.977 [2024-12-05 12:18:13.005709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.977 [2024-12-05 12:18:13.005715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.977 [2024-12-05 12:18:13.005728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.977 qpair failed and we were unable to recover it. 
00:34:47.977 [2024-12-05 12:18:13.015749] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:47.977 [2024-12-05 12:18:13.015795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:47.977 [2024-12-05 12:18:13.015809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:47.977 [2024-12-05 12:18:13.015816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:47.977 [2024-12-05 12:18:13.015822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:47.977 [2024-12-05 12:18:13.015835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:47.977 qpair failed and we were unable to recover it. 
00:34:48.240 [2024-12-05 12:18:13.025735] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.240 [2024-12-05 12:18:13.025780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.240 [2024-12-05 12:18:13.025794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.240 [2024-12-05 12:18:13.025801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.240 [2024-12-05 12:18:13.025808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.240 [2024-12-05 12:18:13.025821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.240 qpair failed and we were unable to recover it. 
00:34:48.240 [2024-12-05 12:18:13.035787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.240 [2024-12-05 12:18:13.035830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.240 [2024-12-05 12:18:13.035843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.240 [2024-12-05 12:18:13.035850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.240 [2024-12-05 12:18:13.035856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.240 [2024-12-05 12:18:13.035870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.240 qpair failed and we were unable to recover it. 
00:34:48.240 [2024-12-05 12:18:13.045750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.240 [2024-12-05 12:18:13.045793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.240 [2024-12-05 12:18:13.045806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.240 [2024-12-05 12:18:13.045813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.240 [2024-12-05 12:18:13.045819] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.241 [2024-12-05 12:18:13.045832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.241 qpair failed and we were unable to recover it. 
00:34:48.241 [2024-12-05 12:18:13.055826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.241 [2024-12-05 12:18:13.055879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.241 [2024-12-05 12:18:13.055892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.241 [2024-12-05 12:18:13.055898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.241 [2024-12-05 12:18:13.055905] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.241 [2024-12-05 12:18:13.055918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.241 qpair failed and we were unable to recover it. 
00:34:48.241 [2024-12-05 12:18:13.065828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.241 [2024-12-05 12:18:13.065876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.241 [2024-12-05 12:18:13.065889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.241 [2024-12-05 12:18:13.065896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.241 [2024-12-05 12:18:13.065902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.241 [2024-12-05 12:18:13.065915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.241 qpair failed and we were unable to recover it. 
00:34:48.241 [2024-12-05 12:18:13.075868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.241 [2024-12-05 12:18:13.075915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.241 [2024-12-05 12:18:13.075931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.241 [2024-12-05 12:18:13.075938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.241 [2024-12-05 12:18:13.075945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.241 [2024-12-05 12:18:13.075958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.241 qpair failed and we were unable to recover it. 
00:34:48.241 [2024-12-05 12:18:13.085861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.241 [2024-12-05 12:18:13.085908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.241 [2024-12-05 12:18:13.085921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.241 [2024-12-05 12:18:13.085928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.241 [2024-12-05 12:18:13.085934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.241 [2024-12-05 12:18:13.085947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.241 qpair failed and we were unable to recover it. 
00:34:48.241 [2024-12-05 12:18:13.095946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.241 [2024-12-05 12:18:13.096000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.241 [2024-12-05 12:18:13.096013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.241 [2024-12-05 12:18:13.096020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.241 [2024-12-05 12:18:13.096026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.241 [2024-12-05 12:18:13.096039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.241 qpair failed and we were unable to recover it. 
00:34:48.241 [2024-12-05 12:18:13.105981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.241 [2024-12-05 12:18:13.106061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.241 [2024-12-05 12:18:13.106074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.241 [2024-12-05 12:18:13.106081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.241 [2024-12-05 12:18:13.106088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.241 [2024-12-05 12:18:13.106101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.241 qpair failed and we were unable to recover it. 
00:34:48.241 [2024-12-05 12:18:13.115988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.241 [2024-12-05 12:18:13.116063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.241 [2024-12-05 12:18:13.116077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.241 [2024-12-05 12:18:13.116084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.241 [2024-12-05 12:18:13.116090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.241 [2024-12-05 12:18:13.116107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.241 qpair failed and we were unable to recover it. 
00:34:48.241 [2024-12-05 12:18:13.125957] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.241 [2024-12-05 12:18:13.126000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.241 [2024-12-05 12:18:13.126013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.241 [2024-12-05 12:18:13.126020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.241 [2024-12-05 12:18:13.126026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.241 [2024-12-05 12:18:13.126039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.241 qpair failed and we were unable to recover it. 
00:34:48.241 [2024-12-05 12:18:13.135926] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.241 [2024-12-05 12:18:13.135974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.241 [2024-12-05 12:18:13.135987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.241 [2024-12-05 12:18:13.135994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.241 [2024-12-05 12:18:13.136001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.241 [2024-12-05 12:18:13.136014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.241 qpair failed and we were unable to recover it. 
00:34:48.241 [2024-12-05 12:18:13.146043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.241 [2024-12-05 12:18:13.146094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.241 [2024-12-05 12:18:13.146107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.241 [2024-12-05 12:18:13.146114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.241 [2024-12-05 12:18:13.146120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.241 [2024-12-05 12:18:13.146133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.241 qpair failed and we were unable to recover it. 
00:34:48.241 [2024-12-05 12:18:13.155956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.241 [2024-12-05 12:18:13.156001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.241 [2024-12-05 12:18:13.156015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.241 [2024-12-05 12:18:13.156022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.241 [2024-12-05 12:18:13.156029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.241 [2024-12-05 12:18:13.156043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.241 qpair failed and we were unable to recover it. 
00:34:48.241 [2024-12-05 12:18:13.166086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.241 [2024-12-05 12:18:13.166132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.241 [2024-12-05 12:18:13.166146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.241 [2024-12-05 12:18:13.166153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.241 [2024-12-05 12:18:13.166160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.241 [2024-12-05 12:18:13.166173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.241 qpair failed and we were unable to recover it. 
00:34:48.241 [2024-12-05 12:18:13.176130] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.241 [2024-12-05 12:18:13.176182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.241 [2024-12-05 12:18:13.176199] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.241 [2024-12-05 12:18:13.176206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.241 [2024-12-05 12:18:13.176213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.241 [2024-12-05 12:18:13.176227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.241 qpair failed and we were unable to recover it. 
00:34:48.242 [2024-12-05 12:18:13.186166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.242 [2024-12-05 12:18:13.186255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.242 [2024-12-05 12:18:13.186280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.242 [2024-12-05 12:18:13.186289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.242 [2024-12-05 12:18:13.186296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.242 [2024-12-05 12:18:13.186315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.242 qpair failed and we were unable to recover it.
00:34:48.242 [2024-12-05 12:18:13.196227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.242 [2024-12-05 12:18:13.196308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.242 [2024-12-05 12:18:13.196325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.242 [2024-12-05 12:18:13.196332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.242 [2024-12-05 12:18:13.196338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.242 [2024-12-05 12:18:13.196353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.242 qpair failed and we were unable to recover it.
00:34:48.242 [2024-12-05 12:18:13.206183] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.242 [2024-12-05 12:18:13.206227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.242 [2024-12-05 12:18:13.206251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.242 [2024-12-05 12:18:13.206258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.242 [2024-12-05 12:18:13.206265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.242 [2024-12-05 12:18:13.206279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.242 qpair failed and we were unable to recover it.
00:34:48.242 [2024-12-05 12:18:13.216289] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.242 [2024-12-05 12:18:13.216335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.242 [2024-12-05 12:18:13.216349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.242 [2024-12-05 12:18:13.216356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.242 [2024-12-05 12:18:13.216363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.242 [2024-12-05 12:18:13.216376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.242 qpair failed and we were unable to recover it.
00:34:48.242 [2024-12-05 12:18:13.226258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.242 [2024-12-05 12:18:13.226342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.242 [2024-12-05 12:18:13.226355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.242 [2024-12-05 12:18:13.226362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.242 [2024-12-05 12:18:13.226369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.242 [2024-12-05 12:18:13.226382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.242 qpair failed and we were unable to recover it.
00:34:48.242 [2024-12-05 12:18:13.236153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.242 [2024-12-05 12:18:13.236203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.242 [2024-12-05 12:18:13.236216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.242 [2024-12-05 12:18:13.236223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.242 [2024-12-05 12:18:13.236229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.242 [2024-12-05 12:18:13.236243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.242 qpair failed and we were unable to recover it.
00:34:48.242 [2024-12-05 12:18:13.246272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.242 [2024-12-05 12:18:13.246342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.242 [2024-12-05 12:18:13.246355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.242 [2024-12-05 12:18:13.246362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.242 [2024-12-05 12:18:13.246368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.242 [2024-12-05 12:18:13.246385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.242 qpair failed and we were unable to recover it.
00:34:48.242 [2024-12-05 12:18:13.256237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.242 [2024-12-05 12:18:13.256279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.242 [2024-12-05 12:18:13.256292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.242 [2024-12-05 12:18:13.256299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.242 [2024-12-05 12:18:13.256305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.242 [2024-12-05 12:18:13.256319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.242 qpair failed and we were unable to recover it.
00:34:48.242 [2024-12-05 12:18:13.266323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.242 [2024-12-05 12:18:13.266367] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.242 [2024-12-05 12:18:13.266380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.242 [2024-12-05 12:18:13.266386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.242 [2024-12-05 12:18:13.266393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.242 [2024-12-05 12:18:13.266406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.242 qpair failed and we were unable to recover it.
00:34:48.242 [2024-12-05 12:18:13.276394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.242 [2024-12-05 12:18:13.276440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.242 [2024-12-05 12:18:13.276453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.242 [2024-12-05 12:18:13.276464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.242 [2024-12-05 12:18:13.276470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.242 [2024-12-05 12:18:13.276484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.242 qpair failed and we were unable to recover it.
00:34:48.242 [2024-12-05 12:18:13.286401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.242 [2024-12-05 12:18:13.286444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.242 [2024-12-05 12:18:13.286460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.242 [2024-12-05 12:18:13.286467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.242 [2024-12-05 12:18:13.286473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.242 [2024-12-05 12:18:13.286487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.242 qpair failed and we were unable to recover it.
00:34:48.504 [2024-12-05 12:18:13.296471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.504 [2024-12-05 12:18:13.296520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.504 [2024-12-05 12:18:13.296534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.504 [2024-12-05 12:18:13.296540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.504 [2024-12-05 12:18:13.296547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.504 [2024-12-05 12:18:13.296560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.504 qpair failed and we were unable to recover it.
00:34:48.504 [2024-12-05 12:18:13.306500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.504 [2024-12-05 12:18:13.306546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.504 [2024-12-05 12:18:13.306564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.504 [2024-12-05 12:18:13.306572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.504 [2024-12-05 12:18:13.306578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.504 [2024-12-05 12:18:13.306593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.504 qpair failed and we were unable to recover it.
00:34:48.504 [2024-12-05 12:18:13.316494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.504 [2024-12-05 12:18:13.316547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.504 [2024-12-05 12:18:13.316561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.504 [2024-12-05 12:18:13.316568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.504 [2024-12-05 12:18:13.316575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.504 [2024-12-05 12:18:13.316589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.504 qpair failed and we were unable to recover it.
00:34:48.504 [2024-12-05 12:18:13.326509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.504 [2024-12-05 12:18:13.326574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.504 [2024-12-05 12:18:13.326587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.504 [2024-12-05 12:18:13.326594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.504 [2024-12-05 12:18:13.326601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.504 [2024-12-05 12:18:13.326614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.504 qpair failed and we were unable to recover it.
00:34:48.504 [2024-12-05 12:18:13.336552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.504 [2024-12-05 12:18:13.336603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.504 [2024-12-05 12:18:13.336619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.504 [2024-12-05 12:18:13.336626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.504 [2024-12-05 12:18:13.336633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.504 [2024-12-05 12:18:13.336646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.504 qpair failed and we were unable to recover it.
00:34:48.504 [2024-12-05 12:18:13.346543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.504 [2024-12-05 12:18:13.346589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.504 [2024-12-05 12:18:13.346602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.504 [2024-12-05 12:18:13.346609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.504 [2024-12-05 12:18:13.346616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.504 [2024-12-05 12:18:13.346629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.504 qpair failed and we were unable to recover it.
00:34:48.504 [2024-12-05 12:18:13.356481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.504 [2024-12-05 12:18:13.356531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.504 [2024-12-05 12:18:13.356544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.504 [2024-12-05 12:18:13.356551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.504 [2024-12-05 12:18:13.356557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.504 [2024-12-05 12:18:13.356571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.504 qpair failed and we were unable to recover it.
00:34:48.504 [2024-12-05 12:18:13.366503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.504 [2024-12-05 12:18:13.366547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.504 [2024-12-05 12:18:13.366560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.504 [2024-12-05 12:18:13.366567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.504 [2024-12-05 12:18:13.366573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.504 [2024-12-05 12:18:13.366587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.504 qpair failed and we were unable to recover it.
00:34:48.504 [2024-12-05 12:18:13.376665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.504 [2024-12-05 12:18:13.376715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.504 [2024-12-05 12:18:13.376728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.504 [2024-12-05 12:18:13.376736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.504 [2024-12-05 12:18:13.376742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.504 [2024-12-05 12:18:13.376760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.504 qpair failed and we were unable to recover it.
00:34:48.504 [2024-12-05 12:18:13.386552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.504 [2024-12-05 12:18:13.386599] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.504 [2024-12-05 12:18:13.386612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.504 [2024-12-05 12:18:13.386619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.504 [2024-12-05 12:18:13.386626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.504 [2024-12-05 12:18:13.386640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.504 qpair failed and we were unable to recover it.
00:34:48.504 [2024-12-05 12:18:13.396717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.504 [2024-12-05 12:18:13.396781] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.504 [2024-12-05 12:18:13.396794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.504 [2024-12-05 12:18:13.396802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.504 [2024-12-05 12:18:13.396808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.504 [2024-12-05 12:18:13.396822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.504 qpair failed and we were unable to recover it.
00:34:48.504 [2024-12-05 12:18:13.406747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.504 [2024-12-05 12:18:13.406825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.504 [2024-12-05 12:18:13.406839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.504 [2024-12-05 12:18:13.406845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.504 [2024-12-05 12:18:13.406852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.504 [2024-12-05 12:18:13.406865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.504 qpair failed and we were unable to recover it.
00:34:48.504 [2024-12-05 12:18:13.416728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.504 [2024-12-05 12:18:13.416770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.504 [2024-12-05 12:18:13.416783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.504 [2024-12-05 12:18:13.416790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.505 [2024-12-05 12:18:13.416796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.505 [2024-12-05 12:18:13.416809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.505 qpair failed and we were unable to recover it.
00:34:48.505 [2024-12-05 12:18:13.426775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.505 [2024-12-05 12:18:13.426822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.505 [2024-12-05 12:18:13.426836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.505 [2024-12-05 12:18:13.426843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.505 [2024-12-05 12:18:13.426849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.505 [2024-12-05 12:18:13.426862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.505 qpair failed and we were unable to recover it.
00:34:48.505 [2024-12-05 12:18:13.436871] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.505 [2024-12-05 12:18:13.436920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.505 [2024-12-05 12:18:13.436934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.505 [2024-12-05 12:18:13.436941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.505 [2024-12-05 12:18:13.436947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.505 [2024-12-05 12:18:13.436960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.505 qpair failed and we were unable to recover it.
00:34:48.505 [2024-12-05 12:18:13.446830] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.505 [2024-12-05 12:18:13.446875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.505 [2024-12-05 12:18:13.446888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.505 [2024-12-05 12:18:13.446895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.505 [2024-12-05 12:18:13.446901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.505 [2024-12-05 12:18:13.446915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.505 qpair failed and we were unable to recover it.
00:34:48.505 [2024-12-05 12:18:13.456723] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.505 [2024-12-05 12:18:13.456770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.505 [2024-12-05 12:18:13.456783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.505 [2024-12-05 12:18:13.456790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.505 [2024-12-05 12:18:13.456796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.505 [2024-12-05 12:18:13.456810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.505 qpair failed and we were unable to recover it.
00:34:48.505 [2024-12-05 12:18:13.466885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.505 [2024-12-05 12:18:13.466928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.505 [2024-12-05 12:18:13.466944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.505 [2024-12-05 12:18:13.466951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.505 [2024-12-05 12:18:13.466957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.505 [2024-12-05 12:18:13.466970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.505 qpair failed and we were unable to recover it.
00:34:48.505 [2024-12-05 12:18:13.476923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.505 [2024-12-05 12:18:13.476968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.505 [2024-12-05 12:18:13.476982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.505 [2024-12-05 12:18:13.476988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.505 [2024-12-05 12:18:13.476995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.505 [2024-12-05 12:18:13.477008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.505 qpair failed and we were unable to recover it.
00:34:48.505 [2024-12-05 12:18:13.486954] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.505 [2024-12-05 12:18:13.487010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.505 [2024-12-05 12:18:13.487023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.505 [2024-12-05 12:18:13.487030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.505 [2024-12-05 12:18:13.487036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.505 [2024-12-05 12:18:13.487050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.505 qpair failed and we were unable to recover it.
00:34:48.505 [2024-12-05 12:18:13.496960] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.505 [2024-12-05 12:18:13.497004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.505 [2024-12-05 12:18:13.497017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.505 [2024-12-05 12:18:13.497024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.505 [2024-12-05 12:18:13.497030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.505 [2024-12-05 12:18:13.497043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.505 qpair failed and we were unable to recover it.
00:34:48.505 [2024-12-05 12:18:13.507000] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.505 [2024-12-05 12:18:13.507048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.505 [2024-12-05 12:18:13.507062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.505 [2024-12-05 12:18:13.507069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.505 [2024-12-05 12:18:13.507075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.505 [2024-12-05 12:18:13.507092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.505 qpair failed and we were unable to recover it.
00:34:48.505 [2024-12-05 12:18:13.516986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.505 [2024-12-05 12:18:13.517033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.505 [2024-12-05 12:18:13.517047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.505 [2024-12-05 12:18:13.517054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.505 [2024-12-05 12:18:13.517060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.505 [2024-12-05 12:18:13.517073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.505 qpair failed and we were unable to recover it.
00:34:48.505 [2024-12-05 12:18:13.527055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.505 [2024-12-05 12:18:13.527145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.505 [2024-12-05 12:18:13.527159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.505 [2024-12-05 12:18:13.527166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.505 [2024-12-05 12:18:13.527172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.505 [2024-12-05 12:18:13.527185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.505 qpair failed and we were unable to recover it.
00:34:48.505 [2024-12-05 12:18:13.537067] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.505 [2024-12-05 12:18:13.537111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.505 [2024-12-05 12:18:13.537124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.505 [2024-12-05 12:18:13.537131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.505 [2024-12-05 12:18:13.537137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.505 [2024-12-05 12:18:13.537151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.505 qpair failed and we were unable to recover it. 
00:34:48.505 [2024-12-05 12:18:13.547093] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.505 [2024-12-05 12:18:13.547177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.505 [2024-12-05 12:18:13.547190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.505 [2024-12-05 12:18:13.547197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.505 [2024-12-05 12:18:13.547206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.506 [2024-12-05 12:18:13.547219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.506 qpair failed and we were unable to recover it. 
00:34:48.768 [2024-12-05 12:18:13.557232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.768 [2024-12-05 12:18:13.557298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.768 [2024-12-05 12:18:13.557323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.768 [2024-12-05 12:18:13.557333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.768 [2024-12-05 12:18:13.557340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.768 [2024-12-05 12:18:13.557359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.768 qpair failed and we were unable to recover it. 
00:34:48.768 [2024-12-05 12:18:13.567057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.768 [2024-12-05 12:18:13.567109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.768 [2024-12-05 12:18:13.567125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.768 [2024-12-05 12:18:13.567132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.768 [2024-12-05 12:18:13.567139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.768 [2024-12-05 12:18:13.567153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.768 qpair failed and we were unable to recover it. 
00:34:48.768 [2024-12-05 12:18:13.577229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.768 [2024-12-05 12:18:13.577274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.768 [2024-12-05 12:18:13.577288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.768 [2024-12-05 12:18:13.577295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.768 [2024-12-05 12:18:13.577302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.768 [2024-12-05 12:18:13.577316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.768 qpair failed and we were unable to recover it. 
00:34:48.768 [2024-12-05 12:18:13.587277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.768 [2024-12-05 12:18:13.587323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.768 [2024-12-05 12:18:13.587337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.768 [2024-12-05 12:18:13.587344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.769 [2024-12-05 12:18:13.587353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.769 [2024-12-05 12:18:13.587367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.769 qpair failed and we were unable to recover it. 
00:34:48.769 [2024-12-05 12:18:13.597263] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.769 [2024-12-05 12:18:13.597311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.769 [2024-12-05 12:18:13.597329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.769 [2024-12-05 12:18:13.597336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.769 [2024-12-05 12:18:13.597343] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.769 [2024-12-05 12:18:13.597356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.769 qpair failed and we were unable to recover it. 
00:34:48.769 [2024-12-05 12:18:13.607260] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.769 [2024-12-05 12:18:13.607307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.769 [2024-12-05 12:18:13.607320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.769 [2024-12-05 12:18:13.607327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.769 [2024-12-05 12:18:13.607334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.769 [2024-12-05 12:18:13.607347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.769 qpair failed and we were unable to recover it. 
00:34:48.769 [2024-12-05 12:18:13.617152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.769 [2024-12-05 12:18:13.617197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.769 [2024-12-05 12:18:13.617210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.769 [2024-12-05 12:18:13.617217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.769 [2024-12-05 12:18:13.617224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.769 [2024-12-05 12:18:13.617237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.769 qpair failed and we were unable to recover it. 
00:34:48.769 [2024-12-05 12:18:13.627277] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.769 [2024-12-05 12:18:13.627325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.769 [2024-12-05 12:18:13.627338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.769 [2024-12-05 12:18:13.627345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.769 [2024-12-05 12:18:13.627352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.769 [2024-12-05 12:18:13.627365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.769 qpair failed and we were unable to recover it. 
00:34:48.769 [2024-12-05 12:18:13.637358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.769 [2024-12-05 12:18:13.637407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.769 [2024-12-05 12:18:13.637423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.769 [2024-12-05 12:18:13.637430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.769 [2024-12-05 12:18:13.637436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.769 [2024-12-05 12:18:13.637458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.769 qpair failed and we were unable to recover it. 
00:34:48.769 [2024-12-05 12:18:13.647364] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.769 [2024-12-05 12:18:13.647415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.769 [2024-12-05 12:18:13.647428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.769 [2024-12-05 12:18:13.647435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.769 [2024-12-05 12:18:13.647441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.769 [2024-12-05 12:18:13.647458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.769 qpair failed and we were unable to recover it. 
00:34:48.769 [2024-12-05 12:18:13.657382] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.769 [2024-12-05 12:18:13.657439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.769 [2024-12-05 12:18:13.657452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.769 [2024-12-05 12:18:13.657462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.769 [2024-12-05 12:18:13.657469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.769 [2024-12-05 12:18:13.657482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.769 qpair failed and we were unable to recover it. 
00:34:48.769 [2024-12-05 12:18:13.667348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.769 [2024-12-05 12:18:13.667390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.769 [2024-12-05 12:18:13.667403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.769 [2024-12-05 12:18:13.667410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.769 [2024-12-05 12:18:13.667416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.769 [2024-12-05 12:18:13.667430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.769 qpair failed and we were unable to recover it. 
00:34:48.769 [2024-12-05 12:18:13.677471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.769 [2024-12-05 12:18:13.677520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.769 [2024-12-05 12:18:13.677534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.769 [2024-12-05 12:18:13.677541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.769 [2024-12-05 12:18:13.677547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.769 [2024-12-05 12:18:13.677560] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.769 qpair failed and we were unable to recover it. 
00:34:48.769 [2024-12-05 12:18:13.687473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.769 [2024-12-05 12:18:13.687513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.769 [2024-12-05 12:18:13.687527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.769 [2024-12-05 12:18:13.687534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.769 [2024-12-05 12:18:13.687540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.769 [2024-12-05 12:18:13.687554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.769 qpair failed and we were unable to recover it. 
00:34:48.769 [2024-12-05 12:18:13.697404] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.769 [2024-12-05 12:18:13.697452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.769 [2024-12-05 12:18:13.697470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.769 [2024-12-05 12:18:13.697477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.769 [2024-12-05 12:18:13.697483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.769 [2024-12-05 12:18:13.697497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.769 qpair failed and we were unable to recover it. 
00:34:48.769 [2024-12-05 12:18:13.707514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.769 [2024-12-05 12:18:13.707560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.769 [2024-12-05 12:18:13.707575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.769 [2024-12-05 12:18:13.707582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.769 [2024-12-05 12:18:13.707588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.769 [2024-12-05 12:18:13.707602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.769 qpair failed and we were unable to recover it. 
00:34:48.769 [2024-12-05 12:18:13.717563] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.769 [2024-12-05 12:18:13.717614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.769 [2024-12-05 12:18:13.717627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.769 [2024-12-05 12:18:13.717634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.769 [2024-12-05 12:18:13.717640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.770 [2024-12-05 12:18:13.717654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.770 qpair failed and we were unable to recover it. 
00:34:48.770 [2024-12-05 12:18:13.727576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.770 [2024-12-05 12:18:13.727668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.770 [2024-12-05 12:18:13.727685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.770 [2024-12-05 12:18:13.727692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.770 [2024-12-05 12:18:13.727698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.770 [2024-12-05 12:18:13.727712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.770 qpair failed and we were unable to recover it. 
00:34:48.770 [2024-12-05 12:18:13.737480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.770 [2024-12-05 12:18:13.737523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.770 [2024-12-05 12:18:13.737537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.770 [2024-12-05 12:18:13.737544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.770 [2024-12-05 12:18:13.737550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.770 [2024-12-05 12:18:13.737564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.770 qpair failed and we were unable to recover it. 
00:34:48.770 [2024-12-05 12:18:13.747638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.770 [2024-12-05 12:18:13.747684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.770 [2024-12-05 12:18:13.747697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.770 [2024-12-05 12:18:13.747704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.770 [2024-12-05 12:18:13.747710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.770 [2024-12-05 12:18:13.747724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.770 qpair failed and we were unable to recover it. 
00:34:48.770 [2024-12-05 12:18:13.757690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.770 [2024-12-05 12:18:13.757742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.770 [2024-12-05 12:18:13.757755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.770 [2024-12-05 12:18:13.757762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.770 [2024-12-05 12:18:13.757768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.770 [2024-12-05 12:18:13.757782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.770 qpair failed and we were unable to recover it. 
00:34:48.770 [2024-12-05 12:18:13.767561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.770 [2024-12-05 12:18:13.767605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.770 [2024-12-05 12:18:13.767618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.770 [2024-12-05 12:18:13.767625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.770 [2024-12-05 12:18:13.767631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.770 [2024-12-05 12:18:13.767648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.770 qpair failed and we were unable to recover it. 
00:34:48.770 [2024-12-05 12:18:13.777750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.770 [2024-12-05 12:18:13.777794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.770 [2024-12-05 12:18:13.777807] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.770 [2024-12-05 12:18:13.777814] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.770 [2024-12-05 12:18:13.777820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.770 [2024-12-05 12:18:13.777833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.770 qpair failed and we were unable to recover it. 
00:34:48.770 [2024-12-05 12:18:13.787730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.770 [2024-12-05 12:18:13.787807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.770 [2024-12-05 12:18:13.787820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.770 [2024-12-05 12:18:13.787826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.770 [2024-12-05 12:18:13.787833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.770 [2024-12-05 12:18:13.787846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.770 qpair failed and we were unable to recover it. 
00:34:48.770 [2024-12-05 12:18:13.797657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:48.770 [2024-12-05 12:18:13.797704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:48.770 [2024-12-05 12:18:13.797717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:48.770 [2024-12-05 12:18:13.797724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:48.770 [2024-12-05 12:18:13.797731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:48.770 [2024-12-05 12:18:13.797744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:48.770 qpair failed and we were unable to recover it. 
00:34:48.770 [2024-12-05 12:18:13.807796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:48.770 [2024-12-05 12:18:13.807839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:48.770 [2024-12-05 12:18:13.807852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:48.770 [2024-12-05 12:18:13.807859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:48.770 [2024-12-05 12:18:13.807865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:48.770 [2024-12-05 12:18:13.807879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:48.770 qpair failed and we were unable to recover it.
00:34:49.035 [2024-12-05 12:18:13.817818] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.035 [2024-12-05 12:18:13.817861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.035 [2024-12-05 12:18:13.817875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.035 [2024-12-05 12:18:13.817882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.035 [2024-12-05 12:18:13.817888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.035 [2024-12-05 12:18:13.817901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.035 qpair failed and we were unable to recover it.
00:34:49.035 [2024-12-05 12:18:13.827843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.035 [2024-12-05 12:18:13.827906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.035 [2024-12-05 12:18:13.827918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.035 [2024-12-05 12:18:13.827925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.035 [2024-12-05 12:18:13.827932] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.035 [2024-12-05 12:18:13.827946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.035 qpair failed and we were unable to recover it.
00:34:49.035 [2024-12-05 12:18:13.837752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.035 [2024-12-05 12:18:13.837798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.035 [2024-12-05 12:18:13.837811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.035 [2024-12-05 12:18:13.837818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.035 [2024-12-05 12:18:13.837824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.035 [2024-12-05 12:18:13.837838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.035 qpair failed and we were unable to recover it.
00:34:49.035 [2024-12-05 12:18:13.847953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.035 [2024-12-05 12:18:13.848021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.035 [2024-12-05 12:18:13.848034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.035 [2024-12-05 12:18:13.848041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.035 [2024-12-05 12:18:13.848047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.035 [2024-12-05 12:18:13.848060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.035 qpair failed and we were unable to recover it.
00:34:49.035 [2024-12-05 12:18:13.857914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.035 [2024-12-05 12:18:13.857955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.035 [2024-12-05 12:18:13.857977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.035 [2024-12-05 12:18:13.857985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.035 [2024-12-05 12:18:13.857991] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.035 [2024-12-05 12:18:13.858004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.035 qpair failed and we were unable to recover it.
00:34:49.035 [2024-12-05 12:18:13.867964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.035 [2024-12-05 12:18:13.868011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.035 [2024-12-05 12:18:13.868024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.035 [2024-12-05 12:18:13.868031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.035 [2024-12-05 12:18:13.868037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.035 [2024-12-05 12:18:13.868050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.035 qpair failed and we were unable to recover it.
00:34:49.035 [2024-12-05 12:18:13.877995] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.035 [2024-12-05 12:18:13.878039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.035 [2024-12-05 12:18:13.878052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.035 [2024-12-05 12:18:13.878059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.035 [2024-12-05 12:18:13.878065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.035 [2024-12-05 12:18:13.878078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.035 qpair failed and we were unable to recover it.
00:34:49.035 [2024-12-05 12:18:13.888017] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.035 [2024-12-05 12:18:13.888063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.035 [2024-12-05 12:18:13.888076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.035 [2024-12-05 12:18:13.888083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.035 [2024-12-05 12:18:13.888089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.035 [2024-12-05 12:18:13.888103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.035 qpair failed and we were unable to recover it.
00:34:49.035 [2024-12-05 12:18:13.898043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.035 [2024-12-05 12:18:13.898131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.035 [2024-12-05 12:18:13.898145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.035 [2024-12-05 12:18:13.898152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.035 [2024-12-05 12:18:13.898158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.035 [2024-12-05 12:18:13.898175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.035 qpair failed and we were unable to recover it.
00:34:49.035 [2024-12-05 12:18:13.908069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.035 [2024-12-05 12:18:13.908160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.035 [2024-12-05 12:18:13.908174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.035 [2024-12-05 12:18:13.908181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.035 [2024-12-05 12:18:13.908187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.035 [2024-12-05 12:18:13.908201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.035 qpair failed and we were unable to recover it.
00:34:49.035 [2024-12-05 12:18:13.918112] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.035 [2024-12-05 12:18:13.918209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.035 [2024-12-05 12:18:13.918223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.035 [2024-12-05 12:18:13.918231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.035 [2024-12-05 12:18:13.918237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.035 [2024-12-05 12:18:13.918250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.035 qpair failed and we were unable to recover it.
00:34:49.035 [2024-12-05 12:18:13.928125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.035 [2024-12-05 12:18:13.928208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.035 [2024-12-05 12:18:13.928221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.035 [2024-12-05 12:18:13.928228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.035 [2024-12-05 12:18:13.928235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.035 [2024-12-05 12:18:13.928248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.035 qpair failed and we were unable to recover it.
00:34:49.035 [2024-12-05 12:18:13.938150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.035 [2024-12-05 12:18:13.938199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.035 [2024-12-05 12:18:13.938212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.035 [2024-12-05 12:18:13.938218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.036 [2024-12-05 12:18:13.938225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.036 [2024-12-05 12:18:13.938238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.036 qpair failed and we were unable to recover it.
00:34:49.036 [2024-12-05 12:18:13.948162] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.036 [2024-12-05 12:18:13.948246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.036 [2024-12-05 12:18:13.948259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.036 [2024-12-05 12:18:13.948266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.036 [2024-12-05 12:18:13.948272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.036 [2024-12-05 12:18:13.948286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.036 qpair failed and we were unable to recover it.
00:34:49.036 [2024-12-05 12:18:13.958200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.036 [2024-12-05 12:18:13.958246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.036 [2024-12-05 12:18:13.958260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.036 [2024-12-05 12:18:13.958266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.036 [2024-12-05 12:18:13.958273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.036 [2024-12-05 12:18:13.958286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.036 qpair failed and we were unable to recover it.
00:34:49.036 [2024-12-05 12:18:13.968220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.036 [2024-12-05 12:18:13.968261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.036 [2024-12-05 12:18:13.968274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.036 [2024-12-05 12:18:13.968281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.036 [2024-12-05 12:18:13.968287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.036 [2024-12-05 12:18:13.968301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.036 qpair failed and we were unable to recover it.
00:34:49.036 [2024-12-05 12:18:13.978125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.036 [2024-12-05 12:18:13.978170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.036 [2024-12-05 12:18:13.978183] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.036 [2024-12-05 12:18:13.978190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.036 [2024-12-05 12:18:13.978196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.036 [2024-12-05 12:18:13.978210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.036 qpair failed and we were unable to recover it.
00:34:49.036 [2024-12-05 12:18:13.988351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.036 [2024-12-05 12:18:13.988446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.036 [2024-12-05 12:18:13.988466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.036 [2024-12-05 12:18:13.988473] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.036 [2024-12-05 12:18:13.988480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.036 [2024-12-05 12:18:13.988494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.036 qpair failed and we were unable to recover it.
00:34:49.036 [2024-12-05 12:18:13.998299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.036 [2024-12-05 12:18:13.998350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.036 [2024-12-05 12:18:13.998363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.036 [2024-12-05 12:18:13.998370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.036 [2024-12-05 12:18:13.998376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.036 [2024-12-05 12:18:13.998389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.036 qpair failed and we were unable to recover it.
00:34:49.036 [2024-12-05 12:18:14.008337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.036 [2024-12-05 12:18:14.008381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.036 [2024-12-05 12:18:14.008394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.036 [2024-12-05 12:18:14.008401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.036 [2024-12-05 12:18:14.008408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.036 [2024-12-05 12:18:14.008421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.036 qpair failed and we were unable to recover it.
00:34:49.036 [2024-12-05 12:18:14.018266] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.036 [2024-12-05 12:18:14.018307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.036 [2024-12-05 12:18:14.018321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.036 [2024-12-05 12:18:14.018327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.036 [2024-12-05 12:18:14.018334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.036 [2024-12-05 12:18:14.018347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.036 qpair failed and we were unable to recover it.
00:34:49.036 [2024-12-05 12:18:14.028376] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.036 [2024-12-05 12:18:14.028422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.036 [2024-12-05 12:18:14.028436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.036 [2024-12-05 12:18:14.028442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.036 [2024-12-05 12:18:14.028449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.036 [2024-12-05 12:18:14.028470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.036 qpair failed and we were unable to recover it.
00:34:49.036 [2024-12-05 12:18:14.038431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.036 [2024-12-05 12:18:14.038486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.036 [2024-12-05 12:18:14.038500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.036 [2024-12-05 12:18:14.038507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.036 [2024-12-05 12:18:14.038513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.036 [2024-12-05 12:18:14.038526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.036 qpair failed and we were unable to recover it.
00:34:49.036 [2024-12-05 12:18:14.048304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.036 [2024-12-05 12:18:14.048346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.036 [2024-12-05 12:18:14.048361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.036 [2024-12-05 12:18:14.048369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.036 [2024-12-05 12:18:14.048376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.036 [2024-12-05 12:18:14.048391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.036 qpair failed and we were unable to recover it.
00:34:49.036 [2024-12-05 12:18:14.058499] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.036 [2024-12-05 12:18:14.058568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.036 [2024-12-05 12:18:14.058582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.036 [2024-12-05 12:18:14.058589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.036 [2024-12-05 12:18:14.058596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.036 [2024-12-05 12:18:14.058609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.036 qpair failed and we were unable to recover it.
00:34:49.036 [2024-12-05 12:18:14.068358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.036 [2024-12-05 12:18:14.068407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.036 [2024-12-05 12:18:14.068420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.036 [2024-12-05 12:18:14.068428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.036 [2024-12-05 12:18:14.068434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.037 [2024-12-05 12:18:14.068448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.037 qpair failed and we were unable to recover it.
00:34:49.037 [2024-12-05 12:18:14.078504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.037 [2024-12-05 12:18:14.078556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.037 [2024-12-05 12:18:14.078570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.037 [2024-12-05 12:18:14.078577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.037 [2024-12-05 12:18:14.078583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.037 [2024-12-05 12:18:14.078597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.037 qpair failed and we were unable to recover it.
00:34:49.300 [2024-12-05 12:18:14.088579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.300 [2024-12-05 12:18:14.088656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.300 [2024-12-05 12:18:14.088669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.300 [2024-12-05 12:18:14.088676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.300 [2024-12-05 12:18:14.088682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.300 [2024-12-05 12:18:14.088696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.300 qpair failed and we were unable to recover it.
00:34:49.300 [2024-12-05 12:18:14.098556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.300 [2024-12-05 12:18:14.098614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.300 [2024-12-05 12:18:14.098628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.300 [2024-12-05 12:18:14.098635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.300 [2024-12-05 12:18:14.098641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.300 [2024-12-05 12:18:14.098654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.300 qpair failed and we were unable to recover it.
00:34:49.300 [2024-12-05 12:18:14.108599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.300 [2024-12-05 12:18:14.108670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.300 [2024-12-05 12:18:14.108684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.300 [2024-12-05 12:18:14.108691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.300 [2024-12-05 12:18:14.108697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.300 [2024-12-05 12:18:14.108710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.300 qpair failed and we were unable to recover it.
00:34:49.300 [2024-12-05 12:18:14.118629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.300 [2024-12-05 12:18:14.118680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.300 [2024-12-05 12:18:14.118697] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.300 [2024-12-05 12:18:14.118704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.300 [2024-12-05 12:18:14.118711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.300 [2024-12-05 12:18:14.118724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.300 qpair failed and we were unable to recover it.
00:34:49.300 [2024-12-05 12:18:14.128651] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.300 [2024-12-05 12:18:14.128745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.300 [2024-12-05 12:18:14.128758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.300 [2024-12-05 12:18:14.128765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.300 [2024-12-05 12:18:14.128771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.300 [2024-12-05 12:18:14.128785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.300 qpair failed and we were unable to recover it.
00:34:49.300 [2024-12-05 12:18:14.138676] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.300 [2024-12-05 12:18:14.138724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.300 [2024-12-05 12:18:14.138737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.300 [2024-12-05 12:18:14.138744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.300 [2024-12-05 12:18:14.138750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.300 [2024-12-05 12:18:14.138763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.300 qpair failed and we were unable to recover it.
00:34:49.300 [2024-12-05 12:18:14.148706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.300 [2024-12-05 12:18:14.148754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.300 [2024-12-05 12:18:14.148768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.300 [2024-12-05 12:18:14.148776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.300 [2024-12-05 12:18:14.148783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.300 [2024-12-05 12:18:14.148797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.300 qpair failed and we were unable to recover it.
00:34:49.300 [2024-12-05 12:18:14.158751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.300 [2024-12-05 12:18:14.158799] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.300 [2024-12-05 12:18:14.158812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.300 [2024-12-05 12:18:14.158819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.300 [2024-12-05 12:18:14.158825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.300 [2024-12-05 12:18:14.158841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-12-05 12:18:14.168632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.301 [2024-12-05 12:18:14.168674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.301 [2024-12-05 12:18:14.168687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.301 [2024-12-05 12:18:14.168694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.301 [2024-12-05 12:18:14.168701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.301 [2024-12-05 12:18:14.168714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-12-05 12:18:14.178707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.301 [2024-12-05 12:18:14.178753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.301 [2024-12-05 12:18:14.178766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.301 [2024-12-05 12:18:14.178774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.301 [2024-12-05 12:18:14.178780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.301 [2024-12-05 12:18:14.178793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-12-05 12:18:14.188801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.301 [2024-12-05 12:18:14.188849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.301 [2024-12-05 12:18:14.188862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.301 [2024-12-05 12:18:14.188869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.301 [2024-12-05 12:18:14.188876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.301 [2024-12-05 12:18:14.188889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-12-05 12:18:14.198847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.301 [2024-12-05 12:18:14.198929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.301 [2024-12-05 12:18:14.198943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.301 [2024-12-05 12:18:14.198950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.301 [2024-12-05 12:18:14.198956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.301 [2024-12-05 12:18:14.198969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-12-05 12:18:14.208862] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.301 [2024-12-05 12:18:14.208915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.301 [2024-12-05 12:18:14.208928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.301 [2024-12-05 12:18:14.208935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.301 [2024-12-05 12:18:14.208941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.301 [2024-12-05 12:18:14.208954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-12-05 12:18:14.218899] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.301 [2024-12-05 12:18:14.218943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.301 [2024-12-05 12:18:14.218957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.301 [2024-12-05 12:18:14.218963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.301 [2024-12-05 12:18:14.218970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.301 [2024-12-05 12:18:14.218983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-12-05 12:18:14.228931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.301 [2024-12-05 12:18:14.228984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.301 [2024-12-05 12:18:14.228997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.301 [2024-12-05 12:18:14.229004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.301 [2024-12-05 12:18:14.229010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.301 [2024-12-05 12:18:14.229023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-12-05 12:18:14.238951] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.301 [2024-12-05 12:18:14.238999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.301 [2024-12-05 12:18:14.239012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.301 [2024-12-05 12:18:14.239019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.301 [2024-12-05 12:18:14.239026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.301 [2024-12-05 12:18:14.239039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-12-05 12:18:14.248961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.301 [2024-12-05 12:18:14.249006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.301 [2024-12-05 12:18:14.249022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.301 [2024-12-05 12:18:14.249030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.301 [2024-12-05 12:18:14.249036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.301 [2024-12-05 12:18:14.249049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-12-05 12:18:14.258984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.301 [2024-12-05 12:18:14.259035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.301 [2024-12-05 12:18:14.259048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.301 [2024-12-05 12:18:14.259055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.301 [2024-12-05 12:18:14.259061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.301 [2024-12-05 12:18:14.259074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-12-05 12:18:14.269025] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.301 [2024-12-05 12:18:14.269071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.301 [2024-12-05 12:18:14.269085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.301 [2024-12-05 12:18:14.269092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.301 [2024-12-05 12:18:14.269099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.301 [2024-12-05 12:18:14.269112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-12-05 12:18:14.279014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.301 [2024-12-05 12:18:14.279063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.301 [2024-12-05 12:18:14.279076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.301 [2024-12-05 12:18:14.279083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.301 [2024-12-05 12:18:14.279089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.301 [2024-12-05 12:18:14.279103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.301 [2024-12-05 12:18:14.289149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.301 [2024-12-05 12:18:14.289195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.301 [2024-12-05 12:18:14.289208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.301 [2024-12-05 12:18:14.289215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.301 [2024-12-05 12:18:14.289221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.301 [2024-12-05 12:18:14.289238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.301 qpair failed and we were unable to recover it. 
00:34:49.302 [2024-12-05 12:18:14.298977] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.302 [2024-12-05 12:18:14.299063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.302 [2024-12-05 12:18:14.299078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.302 [2024-12-05 12:18:14.299085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.302 [2024-12-05 12:18:14.299093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.302 [2024-12-05 12:18:14.299108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.302 qpair failed and we were unable to recover it. 
00:34:49.302 [2024-12-05 12:18:14.309013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.302 [2024-12-05 12:18:14.309064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.302 [2024-12-05 12:18:14.309078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.302 [2024-12-05 12:18:14.309085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.302 [2024-12-05 12:18:14.309091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.302 [2024-12-05 12:18:14.309105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.302 qpair failed and we were unable to recover it. 
00:34:49.302 [2024-12-05 12:18:14.319180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.302 [2024-12-05 12:18:14.319227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.302 [2024-12-05 12:18:14.319240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.302 [2024-12-05 12:18:14.319247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.302 [2024-12-05 12:18:14.319253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.302 [2024-12-05 12:18:14.319267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.302 qpair failed and we were unable to recover it. 
00:34:49.302 [2024-12-05 12:18:14.329190] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.302 [2024-12-05 12:18:14.329309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.302 [2024-12-05 12:18:14.329322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.302 [2024-12-05 12:18:14.329329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.302 [2024-12-05 12:18:14.329336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.302 [2024-12-05 12:18:14.329349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.302 qpair failed and we were unable to recover it. 
00:34:49.302 [2024-12-05 12:18:14.339212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.302 [2024-12-05 12:18:14.339261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.302 [2024-12-05 12:18:14.339286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.302 [2024-12-05 12:18:14.339294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.302 [2024-12-05 12:18:14.339301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.302 [2024-12-05 12:18:14.339320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.302 qpair failed and we were unable to recover it. 
00:34:49.565 [2024-12-05 12:18:14.349115] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.565 [2024-12-05 12:18:14.349178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.565 [2024-12-05 12:18:14.349193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.565 [2024-12-05 12:18:14.349200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.565 [2024-12-05 12:18:14.349207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.565 [2024-12-05 12:18:14.349222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.565 qpair failed and we were unable to recover it. 
00:34:49.565 [2024-12-05 12:18:14.359291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.565 [2024-12-05 12:18:14.359343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.565 [2024-12-05 12:18:14.359357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.565 [2024-12-05 12:18:14.359364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.565 [2024-12-05 12:18:14.359370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.565 [2024-12-05 12:18:14.359385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.565 qpair failed and we were unable to recover it. 
00:34:49.565 [2024-12-05 12:18:14.369305] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.565 [2024-12-05 12:18:14.369352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.565 [2024-12-05 12:18:14.369366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.565 [2024-12-05 12:18:14.369373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.565 [2024-12-05 12:18:14.369379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.565 [2024-12-05 12:18:14.369393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.565 qpair failed and we were unable to recover it. 
00:34:49.565 [2024-12-05 12:18:14.379338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.565 [2024-12-05 12:18:14.379380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.565 [2024-12-05 12:18:14.379394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.565 [2024-12-05 12:18:14.379405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.565 [2024-12-05 12:18:14.379411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.565 [2024-12-05 12:18:14.379424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.565 qpair failed and we were unable to recover it. 
00:34:49.565 [2024-12-05 12:18:14.389355] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.565 [2024-12-05 12:18:14.389405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.565 [2024-12-05 12:18:14.389419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.565 [2024-12-05 12:18:14.389426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.565 [2024-12-05 12:18:14.389432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.565 [2024-12-05 12:18:14.389445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.565 qpair failed and we were unable to recover it. 
00:34:49.565 [2024-12-05 12:18:14.399396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.565 [2024-12-05 12:18:14.399445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.565 [2024-12-05 12:18:14.399463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.565 [2024-12-05 12:18:14.399469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.566 [2024-12-05 12:18:14.399477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.566 [2024-12-05 12:18:14.399491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.566 qpair failed and we were unable to recover it. 
00:34:49.566 [2024-12-05 12:18:14.409390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.566 [2024-12-05 12:18:14.409432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.566 [2024-12-05 12:18:14.409445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.566 [2024-12-05 12:18:14.409452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.566 [2024-12-05 12:18:14.409462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.566 [2024-12-05 12:18:14.409475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.566 qpair failed and we were unable to recover it. 
00:34:49.566 [2024-12-05 12:18:14.419441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.566 [2024-12-05 12:18:14.419494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.566 [2024-12-05 12:18:14.419508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.566 [2024-12-05 12:18:14.419515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.566 [2024-12-05 12:18:14.419521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.566 [2024-12-05 12:18:14.419539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.566 qpair failed and we were unable to recover it. 
00:34:49.566 [2024-12-05 12:18:14.429471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.566 [2024-12-05 12:18:14.429513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.566 [2024-12-05 12:18:14.429526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.566 [2024-12-05 12:18:14.429533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.566 [2024-12-05 12:18:14.429539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.566 [2024-12-05 12:18:14.429553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.566 qpair failed and we were unable to recover it.
00:34:49.566 [2024-12-05 12:18:14.439509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.566 [2024-12-05 12:18:14.439554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.566 [2024-12-05 12:18:14.439567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.566 [2024-12-05 12:18:14.439574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.566 [2024-12-05 12:18:14.439581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.566 [2024-12-05 12:18:14.439594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.566 qpair failed and we were unable to recover it.
00:34:49.566 [2024-12-05 12:18:14.449526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.566 [2024-12-05 12:18:14.449568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.566 [2024-12-05 12:18:14.449581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.566 [2024-12-05 12:18:14.449588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.566 [2024-12-05 12:18:14.449594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.566 [2024-12-05 12:18:14.449608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.566 qpair failed and we were unable to recover it.
00:34:49.566 [2024-12-05 12:18:14.459544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.566 [2024-12-05 12:18:14.459592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.566 [2024-12-05 12:18:14.459605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.566 [2024-12-05 12:18:14.459612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.566 [2024-12-05 12:18:14.459619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.566 [2024-12-05 12:18:14.459633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.566 qpair failed and we were unable to recover it.
00:34:49.566 [2024-12-05 12:18:14.469592] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.566 [2024-12-05 12:18:14.469634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.566 [2024-12-05 12:18:14.469648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.566 [2024-12-05 12:18:14.469655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.566 [2024-12-05 12:18:14.469661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.566 [2024-12-05 12:18:14.469675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.566 qpair failed and we were unable to recover it.
00:34:49.566 [2024-12-05 12:18:14.479607] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.566 [2024-12-05 12:18:14.479664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.566 [2024-12-05 12:18:14.479677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.566 [2024-12-05 12:18:14.479684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.566 [2024-12-05 12:18:14.479690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.566 [2024-12-05 12:18:14.479704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.566 qpair failed and we were unable to recover it.
00:34:49.566 [2024-12-05 12:18:14.489605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.566 [2024-12-05 12:18:14.489648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.566 [2024-12-05 12:18:14.489661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.566 [2024-12-05 12:18:14.489668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.566 [2024-12-05 12:18:14.489674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.566 [2024-12-05 12:18:14.489688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.566 qpair failed and we were unable to recover it.
00:34:49.566 [2024-12-05 12:18:14.499616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.566 [2024-12-05 12:18:14.499658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.566 [2024-12-05 12:18:14.499671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.566 [2024-12-05 12:18:14.499678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.566 [2024-12-05 12:18:14.499684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.566 [2024-12-05 12:18:14.499698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.566 qpair failed and we were unable to recover it.
00:34:49.566 [2024-12-05 12:18:14.509700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.566 [2024-12-05 12:18:14.509746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.566 [2024-12-05 12:18:14.509759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.566 [2024-12-05 12:18:14.509773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.566 [2024-12-05 12:18:14.509779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.566 [2024-12-05 12:18:14.509793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.566 qpair failed and we were unable to recover it.
00:34:49.566 [2024-12-05 12:18:14.519692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.566 [2024-12-05 12:18:14.519741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.566 [2024-12-05 12:18:14.519754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.566 [2024-12-05 12:18:14.519761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.566 [2024-12-05 12:18:14.519767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.566 [2024-12-05 12:18:14.519780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.566 qpair failed and we were unable to recover it.
00:34:49.566 [2024-12-05 12:18:14.529714] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.566 [2024-12-05 12:18:14.529758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.566 [2024-12-05 12:18:14.529771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.566 [2024-12-05 12:18:14.529778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.566 [2024-12-05 12:18:14.529784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.566 [2024-12-05 12:18:14.529798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.567 qpair failed and we were unable to recover it.
00:34:49.567 [2024-12-05 12:18:14.539756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.567 [2024-12-05 12:18:14.539803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.567 [2024-12-05 12:18:14.539816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.567 [2024-12-05 12:18:14.539823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.567 [2024-12-05 12:18:14.539829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.567 [2024-12-05 12:18:14.539842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.567 qpair failed and we were unable to recover it.
00:34:49.567 [2024-12-05 12:18:14.549781] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.567 [2024-12-05 12:18:14.549824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.567 [2024-12-05 12:18:14.549837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.567 [2024-12-05 12:18:14.549844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.567 [2024-12-05 12:18:14.549850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.567 [2024-12-05 12:18:14.549867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.567 qpair failed and we were unable to recover it.
00:34:49.567 [2024-12-05 12:18:14.559675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.567 [2024-12-05 12:18:14.559724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.567 [2024-12-05 12:18:14.559737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.567 [2024-12-05 12:18:14.559744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.567 [2024-12-05 12:18:14.559750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.567 [2024-12-05 12:18:14.559764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.567 qpair failed and we were unable to recover it.
00:34:49.567 [2024-12-05 12:18:14.569701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.567 [2024-12-05 12:18:14.569747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.567 [2024-12-05 12:18:14.569760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.567 [2024-12-05 12:18:14.569767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.567 [2024-12-05 12:18:14.569773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.567 [2024-12-05 12:18:14.569786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.567 qpair failed and we were unable to recover it.
00:34:49.567 [2024-12-05 12:18:14.579841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.567 [2024-12-05 12:18:14.579892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.567 [2024-12-05 12:18:14.579905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.567 [2024-12-05 12:18:14.579912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.567 [2024-12-05 12:18:14.579918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.567 [2024-12-05 12:18:14.579931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.567 qpair failed and we were unable to recover it.
00:34:49.567 [2024-12-05 12:18:14.589921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.567 [2024-12-05 12:18:14.590002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.567 [2024-12-05 12:18:14.590015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.567 [2024-12-05 12:18:14.590022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.567 [2024-12-05 12:18:14.590028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.567 [2024-12-05 12:18:14.590041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.567 qpair failed and we were unable to recover it.
00:34:49.567 [2024-12-05 12:18:14.599809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.567 [2024-12-05 12:18:14.599868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.567 [2024-12-05 12:18:14.599881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.567 [2024-12-05 12:18:14.599888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.567 [2024-12-05 12:18:14.599894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.567 [2024-12-05 12:18:14.599907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.567 qpair failed and we were unable to recover it.
00:34:49.567 [2024-12-05 12:18:14.609900] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.567 [2024-12-05 12:18:14.609944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.567 [2024-12-05 12:18:14.609957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.567 [2024-12-05 12:18:14.609964] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.567 [2024-12-05 12:18:14.609971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.567 [2024-12-05 12:18:14.609984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.567 qpair failed and we were unable to recover it.
00:34:49.830 [2024-12-05 12:18:14.619964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.830 [2024-12-05 12:18:14.620014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.830 [2024-12-05 12:18:14.620027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.830 [2024-12-05 12:18:14.620034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.830 [2024-12-05 12:18:14.620040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.830 [2024-12-05 12:18:14.620053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.830 qpair failed and we were unable to recover it.
00:34:49.830 [2024-12-05 12:18:14.629857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.830 [2024-12-05 12:18:14.629901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.830 [2024-12-05 12:18:14.629914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.830 [2024-12-05 12:18:14.629921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.830 [2024-12-05 12:18:14.629928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.830 [2024-12-05 12:18:14.629941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.830 qpair failed and we were unable to recover it.
00:34:49.830 [2024-12-05 12:18:14.640032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.830 [2024-12-05 12:18:14.640082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.830 [2024-12-05 12:18:14.640097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.830 [2024-12-05 12:18:14.640107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.830 [2024-12-05 12:18:14.640114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.830 [2024-12-05 12:18:14.640128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.830 qpair failed and we were unable to recover it.
00:34:49.830 [2024-12-05 12:18:14.650048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.830 [2024-12-05 12:18:14.650097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.830 [2024-12-05 12:18:14.650112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.830 [2024-12-05 12:18:14.650119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.830 [2024-12-05 12:18:14.650126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.830 [2024-12-05 12:18:14.650140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.830 qpair failed and we were unable to recover it.
00:34:49.830 [2024-12-05 12:18:14.660083] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.830 [2024-12-05 12:18:14.660129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.830 [2024-12-05 12:18:14.660144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.830 [2024-12-05 12:18:14.660151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.830 [2024-12-05 12:18:14.660157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.830 [2024-12-05 12:18:14.660171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.830 qpair failed and we were unable to recover it.
00:34:49.830 [2024-12-05 12:18:14.670105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.830 [2024-12-05 12:18:14.670153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.830 [2024-12-05 12:18:14.670166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.830 [2024-12-05 12:18:14.670173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.830 [2024-12-05 12:18:14.670180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.830 [2024-12-05 12:18:14.670193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.830 qpair failed and we were unable to recover it.
00:34:49.830 [2024-12-05 12:18:14.680114] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.830 [2024-12-05 12:18:14.680162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.830 [2024-12-05 12:18:14.680175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.830 [2024-12-05 12:18:14.680183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.830 [2024-12-05 12:18:14.680189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.830 [2024-12-05 12:18:14.680206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.830 qpair failed and we were unable to recover it.
00:34:49.830 [2024-12-05 12:18:14.690157] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.830 [2024-12-05 12:18:14.690222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.830 [2024-12-05 12:18:14.690237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.830 [2024-12-05 12:18:14.690244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.830 [2024-12-05 12:18:14.690250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.830 [2024-12-05 12:18:14.690267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.830 qpair failed and we were unable to recover it.
00:34:49.830 [2024-12-05 12:18:14.700181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.831 [2024-12-05 12:18:14.700229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.831 [2024-12-05 12:18:14.700243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.831 [2024-12-05 12:18:14.700250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.831 [2024-12-05 12:18:14.700256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.831 [2024-12-05 12:18:14.700270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.831 qpair failed and we were unable to recover it.
00:34:49.831 [2024-12-05 12:18:14.710214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.831 [2024-12-05 12:18:14.710258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.831 [2024-12-05 12:18:14.710272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.831 [2024-12-05 12:18:14.710279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.831 [2024-12-05 12:18:14.710285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.831 [2024-12-05 12:18:14.710298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.831 qpair failed and we were unable to recover it.
00:34:49.831 [2024-12-05 12:18:14.720278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.831 [2024-12-05 12:18:14.720331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.831 [2024-12-05 12:18:14.720356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.831 [2024-12-05 12:18:14.720365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.831 [2024-12-05 12:18:14.720372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.831 [2024-12-05 12:18:14.720391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.831 qpair failed and we were unable to recover it.
00:34:49.831 [2024-12-05 12:18:14.730285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.831 [2024-12-05 12:18:14.730335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.831 [2024-12-05 12:18:14.730351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.831 [2024-12-05 12:18:14.730358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.831 [2024-12-05 12:18:14.730364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.831 [2024-12-05 12:18:14.730379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.831 qpair failed and we were unable to recover it.
00:34:49.831 [2024-12-05 12:18:14.740299] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.831 [2024-12-05 12:18:14.740386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.831 [2024-12-05 12:18:14.740399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.831 [2024-12-05 12:18:14.740406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.831 [2024-12-05 12:18:14.740412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.831 [2024-12-05 12:18:14.740426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.831 qpair failed and we were unable to recover it.
00:34:49.831 [2024-12-05 12:18:14.750345] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.831 [2024-12-05 12:18:14.750398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.831 [2024-12-05 12:18:14.750411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.831 [2024-12-05 12:18:14.750418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.831 [2024-12-05 12:18:14.750425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.831 [2024-12-05 12:18:14.750438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.831 qpair failed and we were unable to recover it.
00:34:49.831 [2024-12-05 12:18:14.760368] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.831 [2024-12-05 12:18:14.760416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.831 [2024-12-05 12:18:14.760429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.831 [2024-12-05 12:18:14.760436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.831 [2024-12-05 12:18:14.760442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.831 [2024-12-05 12:18:14.760459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.831 qpair failed and we were unable to recover it.
00:34:49.831 [2024-12-05 12:18:14.770380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:49.831 [2024-12-05 12:18:14.770426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:49.831 [2024-12-05 12:18:14.770440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:49.831 [2024-12-05 12:18:14.770451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:49.831 [2024-12-05 12:18:14.770461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:49.831 [2024-12-05 12:18:14.770475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:49.831 qpair failed and we were unable to recover it.
00:34:49.831 [2024-12-05 12:18:14.780413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.831 [2024-12-05 12:18:14.780459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.831 [2024-12-05 12:18:14.780473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.831 [2024-12-05 12:18:14.780480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.831 [2024-12-05 12:18:14.780486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.831 [2024-12-05 12:18:14.780500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.831 qpair failed and we were unable to recover it. 
00:34:49.831 [2024-12-05 12:18:14.790412] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.831 [2024-12-05 12:18:14.790458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.831 [2024-12-05 12:18:14.790471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.831 [2024-12-05 12:18:14.790478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.831 [2024-12-05 12:18:14.790484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.831 [2024-12-05 12:18:14.790498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.831 qpair failed and we were unable to recover it. 
00:34:49.831 [2024-12-05 12:18:14.800452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.831 [2024-12-05 12:18:14.800500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.831 [2024-12-05 12:18:14.800513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.831 [2024-12-05 12:18:14.800520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.831 [2024-12-05 12:18:14.800527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.831 [2024-12-05 12:18:14.800541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.831 qpair failed and we were unable to recover it. 
00:34:49.831 [2024-12-05 12:18:14.810468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.831 [2024-12-05 12:18:14.810510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.831 [2024-12-05 12:18:14.810523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.831 [2024-12-05 12:18:14.810530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.831 [2024-12-05 12:18:14.810536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.831 [2024-12-05 12:18:14.810554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.831 qpair failed and we were unable to recover it. 
00:34:49.831 [2024-12-05 12:18:14.820501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.831 [2024-12-05 12:18:14.820555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.831 [2024-12-05 12:18:14.820571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.831 [2024-12-05 12:18:14.820578] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.831 [2024-12-05 12:18:14.820584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.831 [2024-12-05 12:18:14.820601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.831 qpair failed and we were unable to recover it. 
00:34:49.831 [2024-12-05 12:18:14.830572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.831 [2024-12-05 12:18:14.830621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.831 [2024-12-05 12:18:14.830635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.832 [2024-12-05 12:18:14.830642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.832 [2024-12-05 12:18:14.830649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.832 [2024-12-05 12:18:14.830662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.832 qpair failed and we were unable to recover it. 
00:34:49.832 [2024-12-05 12:18:14.840549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.832 [2024-12-05 12:18:14.840600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.832 [2024-12-05 12:18:14.840614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.832 [2024-12-05 12:18:14.840621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.832 [2024-12-05 12:18:14.840627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.832 [2024-12-05 12:18:14.840641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.832 qpair failed and we were unable to recover it. 
00:34:49.832 [2024-12-05 12:18:14.850641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.832 [2024-12-05 12:18:14.850682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.832 [2024-12-05 12:18:14.850696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.832 [2024-12-05 12:18:14.850703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.832 [2024-12-05 12:18:14.850709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.832 [2024-12-05 12:18:14.850722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.832 qpair failed and we were unable to recover it. 
00:34:49.832 [2024-12-05 12:18:14.860612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.832 [2024-12-05 12:18:14.860660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.832 [2024-12-05 12:18:14.860673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.832 [2024-12-05 12:18:14.860680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.832 [2024-12-05 12:18:14.860687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.832 [2024-12-05 12:18:14.860701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.832 qpair failed and we were unable to recover it. 
00:34:49.832 [2024-12-05 12:18:14.870648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:49.832 [2024-12-05 12:18:14.870697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:49.832 [2024-12-05 12:18:14.870711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:49.832 [2024-12-05 12:18:14.870717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:49.832 [2024-12-05 12:18:14.870724] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:49.832 [2024-12-05 12:18:14.870737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:49.832 qpair failed and we were unable to recover it. 
00:34:50.095 [2024-12-05 12:18:14.880694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.095 [2024-12-05 12:18:14.880743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.095 [2024-12-05 12:18:14.880756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.095 [2024-12-05 12:18:14.880763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.095 [2024-12-05 12:18:14.880769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.095 [2024-12-05 12:18:14.880783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.095 qpair failed and we were unable to recover it. 
00:34:50.095 [2024-12-05 12:18:14.890699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.095 [2024-12-05 12:18:14.890744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.095 [2024-12-05 12:18:14.890757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.095 [2024-12-05 12:18:14.890764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.095 [2024-12-05 12:18:14.890771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.095 [2024-12-05 12:18:14.890784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.095 qpair failed and we were unable to recover it. 
00:34:50.095 [2024-12-05 12:18:14.900763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.095 [2024-12-05 12:18:14.900849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.095 [2024-12-05 12:18:14.900863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.095 [2024-12-05 12:18:14.900875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.095 [2024-12-05 12:18:14.900882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.095 [2024-12-05 12:18:14.900897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.095 qpair failed and we were unable to recover it. 
00:34:50.095 [2024-12-05 12:18:14.910766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.095 [2024-12-05 12:18:14.910811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.095 [2024-12-05 12:18:14.910824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.095 [2024-12-05 12:18:14.910831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.095 [2024-12-05 12:18:14.910837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.095 [2024-12-05 12:18:14.910852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.095 qpair failed and we were unable to recover it. 
00:34:50.095 [2024-12-05 12:18:14.920779] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.095 [2024-12-05 12:18:14.920836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.095 [2024-12-05 12:18:14.920849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.095 [2024-12-05 12:18:14.920857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.095 [2024-12-05 12:18:14.920863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.095 [2024-12-05 12:18:14.920876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.095 qpair failed and we were unable to recover it. 
00:34:50.095 [2024-12-05 12:18:14.930788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.095 [2024-12-05 12:18:14.930832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.095 [2024-12-05 12:18:14.930845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.095 [2024-12-05 12:18:14.930852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.096 [2024-12-05 12:18:14.930859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.096 [2024-12-05 12:18:14.930872] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.096 qpair failed and we were unable to recover it. 
00:34:50.096 [2024-12-05 12:18:14.940819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.096 [2024-12-05 12:18:14.940871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.096 [2024-12-05 12:18:14.940884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.096 [2024-12-05 12:18:14.940891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.096 [2024-12-05 12:18:14.940897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.096 [2024-12-05 12:18:14.940914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.096 qpair failed and we were unable to recover it. 
00:34:50.096 [2024-12-05 12:18:14.950864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.096 [2024-12-05 12:18:14.950910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.096 [2024-12-05 12:18:14.950923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.096 [2024-12-05 12:18:14.950930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.096 [2024-12-05 12:18:14.950936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.096 [2024-12-05 12:18:14.950949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.096 qpair failed and we were unable to recover it. 
00:34:50.096 [2024-12-05 12:18:14.960909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.096 [2024-12-05 12:18:14.960958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.096 [2024-12-05 12:18:14.960970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.096 [2024-12-05 12:18:14.960977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.096 [2024-12-05 12:18:14.960984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.096 [2024-12-05 12:18:14.960997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.096 qpair failed and we were unable to recover it. 
00:34:50.096 [2024-12-05 12:18:14.970907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.096 [2024-12-05 12:18:14.970949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.096 [2024-12-05 12:18:14.970962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.096 [2024-12-05 12:18:14.970969] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.096 [2024-12-05 12:18:14.970975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.096 [2024-12-05 12:18:14.970988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.096 qpair failed and we were unable to recover it. 
00:34:50.096 [2024-12-05 12:18:14.980940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.096 [2024-12-05 12:18:14.980983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.096 [2024-12-05 12:18:14.980996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.096 [2024-12-05 12:18:14.981003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.096 [2024-12-05 12:18:14.981009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.096 [2024-12-05 12:18:14.981022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.096 qpair failed and we were unable to recover it. 
00:34:50.096 [2024-12-05 12:18:14.990994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.096 [2024-12-05 12:18:14.991080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.096 [2024-12-05 12:18:14.991093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.096 [2024-12-05 12:18:14.991100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.096 [2024-12-05 12:18:14.991106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.096 [2024-12-05 12:18:14.991119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.096 qpair failed and we were unable to recover it. 
00:34:50.096 [2024-12-05 12:18:15.001018] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.096 [2024-12-05 12:18:15.001064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.096 [2024-12-05 12:18:15.001077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.096 [2024-12-05 12:18:15.001084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.096 [2024-12-05 12:18:15.001090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.096 [2024-12-05 12:18:15.001104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.096 qpair failed and we were unable to recover it. 
00:34:50.096 [2024-12-05 12:18:15.010930] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.096 [2024-12-05 12:18:15.010977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.096 [2024-12-05 12:18:15.010990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.096 [2024-12-05 12:18:15.010997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.096 [2024-12-05 12:18:15.011003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.096 [2024-12-05 12:18:15.011016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.096 qpair failed and we were unable to recover it. 
00:34:50.096 [2024-12-05 12:18:15.020932] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.096 [2024-12-05 12:18:15.020977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.096 [2024-12-05 12:18:15.020990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.096 [2024-12-05 12:18:15.020997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.096 [2024-12-05 12:18:15.021003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.096 [2024-12-05 12:18:15.021017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.096 qpair failed and we were unable to recover it. 
00:34:50.096 [2024-12-05 12:18:15.030979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.096 [2024-12-05 12:18:15.031024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.096 [2024-12-05 12:18:15.031037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.096 [2024-12-05 12:18:15.031048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.096 [2024-12-05 12:18:15.031054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.096 [2024-12-05 12:18:15.031068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.096 qpair failed and we were unable to recover it. 
00:34:50.096 [2024-12-05 12:18:15.041123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.096 [2024-12-05 12:18:15.041209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.096 [2024-12-05 12:18:15.041222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.096 [2024-12-05 12:18:15.041228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.096 [2024-12-05 12:18:15.041235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.096 [2024-12-05 12:18:15.041248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.096 qpair failed and we were unable to recover it. 
00:34:50.096 [2024-12-05 12:18:15.051131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.096 [2024-12-05 12:18:15.051178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.096 [2024-12-05 12:18:15.051191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.096 [2024-12-05 12:18:15.051198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.096 [2024-12-05 12:18:15.051204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.096 [2024-12-05 12:18:15.051217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.096 qpair failed and we were unable to recover it. 
00:34:50.097 [2024-12-05 12:18:15.061139] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.097 [2024-12-05 12:18:15.061189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.097 [2024-12-05 12:18:15.061203] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.097 [2024-12-05 12:18:15.061209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.097 [2024-12-05 12:18:15.061216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.097 [2024-12-05 12:18:15.061229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.097 qpair failed and we were unable to recover it. 
00:34:50.097 [2024-12-05 12:18:15.071175] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.097 [2024-12-05 12:18:15.071226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.097 [2024-12-05 12:18:15.071239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.097 [2024-12-05 12:18:15.071246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.097 [2024-12-05 12:18:15.071252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.097 [2024-12-05 12:18:15.071269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.097 qpair failed and we were unable to recover it.
00:34:50.097 [2024-12-05 12:18:15.081160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.097 [2024-12-05 12:18:15.081204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.097 [2024-12-05 12:18:15.081218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.097 [2024-12-05 12:18:15.081225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.097 [2024-12-05 12:18:15.081231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.097 [2024-12-05 12:18:15.081244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.097 qpair failed and we were unable to recover it.
00:34:50.097 [2024-12-05 12:18:15.091223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.097 [2024-12-05 12:18:15.091263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.097 [2024-12-05 12:18:15.091276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.097 [2024-12-05 12:18:15.091283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.097 [2024-12-05 12:18:15.091289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.097 [2024-12-05 12:18:15.091302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.097 qpair failed and we were unable to recover it.
00:34:50.097 [2024-12-05 12:18:15.101267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.097 [2024-12-05 12:18:15.101312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.097 [2024-12-05 12:18:15.101325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.097 [2024-12-05 12:18:15.101332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.097 [2024-12-05 12:18:15.101338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.097 [2024-12-05 12:18:15.101351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.097 qpair failed and we were unable to recover it.
00:34:50.097 [2024-12-05 12:18:15.111297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.097 [2024-12-05 12:18:15.111343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.097 [2024-12-05 12:18:15.111357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.097 [2024-12-05 12:18:15.111363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.097 [2024-12-05 12:18:15.111370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.097 [2024-12-05 12:18:15.111383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.097 qpair failed and we were unable to recover it.
00:34:50.097 [2024-12-05 12:18:15.121250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.097 [2024-12-05 12:18:15.121298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.097 [2024-12-05 12:18:15.121311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.097 [2024-12-05 12:18:15.121318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.097 [2024-12-05 12:18:15.121325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.097 [2024-12-05 12:18:15.121338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.097 qpair failed and we were unable to recover it.
00:34:50.097 [2024-12-05 12:18:15.131219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.097 [2024-12-05 12:18:15.131265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.097 [2024-12-05 12:18:15.131279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.097 [2024-12-05 12:18:15.131286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.097 [2024-12-05 12:18:15.131292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.097 [2024-12-05 12:18:15.131306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.097 qpair failed and we were unable to recover it.
00:34:50.097 [2024-12-05 12:18:15.141374] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.097 [2024-12-05 12:18:15.141417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.097 [2024-12-05 12:18:15.141430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.097 [2024-12-05 12:18:15.141437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.097 [2024-12-05 12:18:15.141443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.097 [2024-12-05 12:18:15.141460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.097 qpair failed and we were unable to recover it.
00:34:50.359 [2024-12-05 12:18:15.151301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.359 [2024-12-05 12:18:15.151348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.359 [2024-12-05 12:18:15.151361] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.359 [2024-12-05 12:18:15.151368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.359 [2024-12-05 12:18:15.151375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.359 [2024-12-05 12:18:15.151389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.359 qpair failed and we were unable to recover it.
00:34:50.359 [2024-12-05 12:18:15.161425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.359 [2024-12-05 12:18:15.161475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.359 [2024-12-05 12:18:15.161489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.359 [2024-12-05 12:18:15.161499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.359 [2024-12-05 12:18:15.161506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.359 [2024-12-05 12:18:15.161520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.359 qpair failed and we were unable to recover it.
00:34:50.359 [2024-12-05 12:18:15.171449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.359 [2024-12-05 12:18:15.171552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.359 [2024-12-05 12:18:15.171565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.359 [2024-12-05 12:18:15.171572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.359 [2024-12-05 12:18:15.171578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.359 [2024-12-05 12:18:15.171591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.359 qpair failed and we were unable to recover it.
00:34:50.359 [2024-12-05 12:18:15.181493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.359 [2024-12-05 12:18:15.181583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.359 [2024-12-05 12:18:15.181596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.359 [2024-12-05 12:18:15.181603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.359 [2024-12-05 12:18:15.181610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.359 [2024-12-05 12:18:15.181623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.359 qpair failed and we were unable to recover it.
00:34:50.359 [2024-12-05 12:18:15.191494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.359 [2024-12-05 12:18:15.191543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.359 [2024-12-05 12:18:15.191555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.359 [2024-12-05 12:18:15.191562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.360 [2024-12-05 12:18:15.191568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.360 [2024-12-05 12:18:15.191581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.360 qpair failed and we were unable to recover it.
00:34:50.360 [2024-12-05 12:18:15.201543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.360 [2024-12-05 12:18:15.201592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.360 [2024-12-05 12:18:15.201605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.360 [2024-12-05 12:18:15.201612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.360 [2024-12-05 12:18:15.201619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.360 [2024-12-05 12:18:15.201636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.360 qpair failed and we were unable to recover it.
00:34:50.360 [2024-12-05 12:18:15.211423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.360 [2024-12-05 12:18:15.211471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.360 [2024-12-05 12:18:15.211484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.360 [2024-12-05 12:18:15.211491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.360 [2024-12-05 12:18:15.211497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.360 [2024-12-05 12:18:15.211510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.360 qpair failed and we were unable to recover it.
00:34:50.360 [2024-12-05 12:18:15.221462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.360 [2024-12-05 12:18:15.221539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.360 [2024-12-05 12:18:15.221554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.360 [2024-12-05 12:18:15.221562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.360 [2024-12-05 12:18:15.221568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.360 [2024-12-05 12:18:15.221582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.360 qpair failed and we were unable to recover it.
00:34:50.360 [2024-12-05 12:18:15.231641] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.360 [2024-12-05 12:18:15.231728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.360 [2024-12-05 12:18:15.231741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.360 [2024-12-05 12:18:15.231748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.360 [2024-12-05 12:18:15.231755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.360 [2024-12-05 12:18:15.231768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.360 qpair failed and we were unable to recover it.
00:34:50.360 [2024-12-05 12:18:15.241640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.360 [2024-12-05 12:18:15.241684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.360 [2024-12-05 12:18:15.241698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.360 [2024-12-05 12:18:15.241705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.360 [2024-12-05 12:18:15.241711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.360 [2024-12-05 12:18:15.241725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.360 qpair failed and we were unable to recover it.
00:34:50.360 [2024-12-05 12:18:15.251539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.360 [2024-12-05 12:18:15.251585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.360 [2024-12-05 12:18:15.251598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.360 [2024-12-05 12:18:15.251605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.360 [2024-12-05 12:18:15.251611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.360 [2024-12-05 12:18:15.251625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.360 qpair failed and we were unable to recover it.
00:34:50.360 [2024-12-05 12:18:15.261706] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.360 [2024-12-05 12:18:15.261754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.360 [2024-12-05 12:18:15.261767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.360 [2024-12-05 12:18:15.261774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.360 [2024-12-05 12:18:15.261780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.360 [2024-12-05 12:18:15.261793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.360 qpair failed and we were unable to recover it.
00:34:50.360 [2024-12-05 12:18:15.271728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.360 [2024-12-05 12:18:15.271778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.360 [2024-12-05 12:18:15.271791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.360 [2024-12-05 12:18:15.271797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.360 [2024-12-05 12:18:15.271804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.360 [2024-12-05 12:18:15.271817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.360 qpair failed and we were unable to recover it.
00:34:50.360 [2024-12-05 12:18:15.281804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.360 [2024-12-05 12:18:15.281851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.360 [2024-12-05 12:18:15.281868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.360 [2024-12-05 12:18:15.281875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.360 [2024-12-05 12:18:15.281881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.360 [2024-12-05 12:18:15.281895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.360 qpair failed and we were unable to recover it.
00:34:50.360 [2024-12-05 12:18:15.291788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.360 [2024-12-05 12:18:15.291829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.360 [2024-12-05 12:18:15.291843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.360 [2024-12-05 12:18:15.291853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.360 [2024-12-05 12:18:15.291860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.360 [2024-12-05 12:18:15.291873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.360 qpair failed and we were unable to recover it.
00:34:50.360 [2024-12-05 12:18:15.301674] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.360 [2024-12-05 12:18:15.301716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.360 [2024-12-05 12:18:15.301729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.360 [2024-12-05 12:18:15.301736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.360 [2024-12-05 12:18:15.301742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.360 [2024-12-05 12:18:15.301755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.360 qpair failed and we were unable to recover it.
00:34:50.360 [2024-12-05 12:18:15.311846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.360 [2024-12-05 12:18:15.311913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.360 [2024-12-05 12:18:15.311926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.360 [2024-12-05 12:18:15.311933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.360 [2024-12-05 12:18:15.311939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.360 [2024-12-05 12:18:15.311952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.360 qpair failed and we were unable to recover it.
00:34:50.360 [2024-12-05 12:18:15.321863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.361 [2024-12-05 12:18:15.321915] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.361 [2024-12-05 12:18:15.321928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.361 [2024-12-05 12:18:15.321935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.361 [2024-12-05 12:18:15.321941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.361 [2024-12-05 12:18:15.321955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.361 qpair failed and we were unable to recover it.
00:34:50.361 [2024-12-05 12:18:15.331887] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.361 [2024-12-05 12:18:15.331932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.361 [2024-12-05 12:18:15.331945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.361 [2024-12-05 12:18:15.331952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.361 [2024-12-05 12:18:15.331958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.361 [2024-12-05 12:18:15.331975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.361 qpair failed and we were unable to recover it.
00:34:50.361 [2024-12-05 12:18:15.341924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.361 [2024-12-05 12:18:15.341970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.361 [2024-12-05 12:18:15.341983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.361 [2024-12-05 12:18:15.341990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.361 [2024-12-05 12:18:15.341996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.361 [2024-12-05 12:18:15.342009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.361 qpair failed and we were unable to recover it.
00:34:50.361 [2024-12-05 12:18:15.351928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.361 [2024-12-05 12:18:15.351973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.361 [2024-12-05 12:18:15.351986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.361 [2024-12-05 12:18:15.351993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.361 [2024-12-05 12:18:15.351999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.361 [2024-12-05 12:18:15.352012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.361 qpair failed and we were unable to recover it.
00:34:50.361 [2024-12-05 12:18:15.361979] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.361 [2024-12-05 12:18:15.362026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.361 [2024-12-05 12:18:15.362040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.361 [2024-12-05 12:18:15.362047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.361 [2024-12-05 12:18:15.362053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.361 [2024-12-05 12:18:15.362066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.361 qpair failed and we were unable to recover it.
00:34:50.361 [2024-12-05 12:18:15.371948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:50.361 [2024-12-05 12:18:15.371991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:50.361 [2024-12-05 12:18:15.372004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:50.361 [2024-12-05 12:18:15.372011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:50.361 [2024-12-05 12:18:15.372017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0
00:34:50.361 [2024-12-05 12:18:15.372030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:34:50.361 qpair failed and we were unable to recover it.
00:34:50.361 [2024-12-05 12:18:15.381994] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.361 [2024-12-05 12:18:15.382048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.361 [2024-12-05 12:18:15.382061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.361 [2024-12-05 12:18:15.382068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.361 [2024-12-05 12:18:15.382075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.361 [2024-12-05 12:18:15.382088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.361 qpair failed and we were unable to recover it. 
00:34:50.361 [2024-12-05 12:18:15.392045] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.361 [2024-12-05 12:18:15.392092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.361 [2024-12-05 12:18:15.392105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.361 [2024-12-05 12:18:15.392111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.361 [2024-12-05 12:18:15.392117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.361 [2024-12-05 12:18:15.392131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.361 qpair failed and we were unable to recover it. 
00:34:50.361 [2024-12-05 12:18:15.402076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.361 [2024-12-05 12:18:15.402132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.361 [2024-12-05 12:18:15.402145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.361 [2024-12-05 12:18:15.402151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.361 [2024-12-05 12:18:15.402158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.361 [2024-12-05 12:18:15.402171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.361 qpair failed and we were unable to recover it. 
00:34:50.621 [2024-12-05 12:18:15.412092] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.621 [2024-12-05 12:18:15.412144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.621 [2024-12-05 12:18:15.412159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.621 [2024-12-05 12:18:15.412166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.621 [2024-12-05 12:18:15.412174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.621 [2024-12-05 12:18:15.412188] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.621 qpair failed and we were unable to recover it. 
00:34:50.621 [2024-12-05 12:18:15.422127] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.621 [2024-12-05 12:18:15.422183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.621 [2024-12-05 12:18:15.422208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.621 [2024-12-05 12:18:15.422225] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.621 [2024-12-05 12:18:15.422232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.621 [2024-12-05 12:18:15.422251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.621 qpair failed and we were unable to recover it. 
00:34:50.621 [2024-12-05 12:18:15.432151] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.621 [2024-12-05 12:18:15.432205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.621 [2024-12-05 12:18:15.432229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.621 [2024-12-05 12:18:15.432239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.621 [2024-12-05 12:18:15.432246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.621 [2024-12-05 12:18:15.432265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.621 qpair failed and we were unable to recover it. 
00:34:50.621 [2024-12-05 12:18:15.442061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.621 [2024-12-05 12:18:15.442109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.621 [2024-12-05 12:18:15.442124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.621 [2024-12-05 12:18:15.442131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.621 [2024-12-05 12:18:15.442138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.621 [2024-12-05 12:18:15.442153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.621 qpair failed and we were unable to recover it. 
00:34:50.621 [2024-12-05 12:18:15.452209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.621 [2024-12-05 12:18:15.452262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.621 [2024-12-05 12:18:15.452287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.622 [2024-12-05 12:18:15.452296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.622 [2024-12-05 12:18:15.452303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.622 [2024-12-05 12:18:15.452322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.622 qpair failed and we were unable to recover it. 
00:34:50.622 [2024-12-05 12:18:15.462242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.622 [2024-12-05 12:18:15.462291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.622 [2024-12-05 12:18:15.462306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.622 [2024-12-05 12:18:15.462314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.622 [2024-12-05 12:18:15.462320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.622 [2024-12-05 12:18:15.462340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.622 qpair failed and we were unable to recover it. 
00:34:50.622 [2024-12-05 12:18:15.472276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.622 [2024-12-05 12:18:15.472323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.622 [2024-12-05 12:18:15.472337] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.622 [2024-12-05 12:18:15.472344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.622 [2024-12-05 12:18:15.472351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.622 [2024-12-05 12:18:15.472364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.622 qpair failed and we were unable to recover it. 
00:34:50.622 [2024-12-05 12:18:15.482271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.622 [2024-12-05 12:18:15.482316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.622 [2024-12-05 12:18:15.482330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.622 [2024-12-05 12:18:15.482337] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.622 [2024-12-05 12:18:15.482344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.622 [2024-12-05 12:18:15.482357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.622 qpair failed and we were unable to recover it. 
00:34:50.622 [2024-12-05 12:18:15.492331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.622 [2024-12-05 12:18:15.492392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.622 [2024-12-05 12:18:15.492405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.622 [2024-12-05 12:18:15.492412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.622 [2024-12-05 12:18:15.492418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.622 [2024-12-05 12:18:15.492432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.622 qpair failed and we were unable to recover it. 
00:34:50.622 [2024-12-05 12:18:15.502340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.622 [2024-12-05 12:18:15.502381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.622 [2024-12-05 12:18:15.502395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.622 [2024-12-05 12:18:15.502401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.622 [2024-12-05 12:18:15.502408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.622 [2024-12-05 12:18:15.502421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.622 qpair failed and we were unable to recover it. 
00:34:50.622 [2024-12-05 12:18:15.512372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:50.622 [2024-12-05 12:18:15.512423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:50.622 [2024-12-05 12:18:15.512437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:50.622 [2024-12-05 12:18:15.512444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:50.622 [2024-12-05 12:18:15.512450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x8d40c0 00:34:50.622 [2024-12-05 12:18:15.512469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:34:50.622 qpair failed and we were unable to recover it. 00:34:50.622 [2024-12-05 12:18:15.512561] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:34:50.622 A controller has encountered a failure and is being reset. 00:34:50.622 Controller properly reset. 00:34:50.622 Initializing NVMe Controllers 00:34:50.622 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:50.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:50.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:50.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:50.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:50.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:50.622 Initialization complete. Launching workers. 
00:34:50.622 Starting thread on core 1 00:34:50.622 Starting thread on core 2 00:34:50.622 Starting thread on core 3 00:34:50.622 Starting thread on core 0 00:34:50.622 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:50.622 00:34:50.622 real 0m11.352s 00:34:50.622 user 0m22.095s 00:34:50.622 sys 0m3.810s 00:34:50.622 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:50.622 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:50.622 ************************************ 00:34:50.622 END TEST nvmf_target_disconnect_tc2 00:34:50.622 ************************************ 00:34:50.622 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:50.622 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:50.622 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:50.622 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:50.622 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@99 -- # sync 00:34:50.622 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:50.622 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # set +e 00:34:50.622 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:50.622 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:50.622 rmmod nvme_tcp 00:34:50.890 rmmod nvme_fabrics 00:34:50.890 rmmod nvme_keyring 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # set -e 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # return 0 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # '[' -n 1554624 ']' 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@337 -- # killprocess 1554624 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1554624 ']' 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1554624 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1554624 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1554624' 00:34:50.890 killing process with pid 1554624 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1554624 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1554624 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/setup.sh@254 -- # local dev 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@257 -- # remove_target_ns 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:50.890 12:18:15 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@258 -- # delete_main_bridge 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@121 -- # return 0 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@261 -- # [[ -e 
/sys/class/net/cvl_0_1/address ]] 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@274 -- # iptr 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@548 -- # iptables-save 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@548 -- # iptables-restore 00:34:53.433 00:34:53.433 real 0m21.861s 00:34:53.433 user 0m49.543s 00:34:53.433 sys 0m10.108s 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:53.433 12:18:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:53.433 ************************************ 00:34:53.433 END TEST nvmf_target_disconnect 00:34:53.433 ************************************ 00:34:53.433 12:18:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 
00:34:53.433 00:34:53.433 real 6m28.505s 00:34:53.433 user 11m18.750s 00:34:53.433 sys 2m16.023s 00:34:53.433 12:18:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:53.433 12:18:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.433 ************************************ 00:34:53.433 END TEST nvmf_host 00:34:53.433 ************************************ 00:34:53.433 12:18:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:34:53.433 12:18:18 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:34:53.433 12:18:18 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:34:53.433 12:18:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:53.433 12:18:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:53.433 12:18:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:53.433 ************************************ 00:34:53.433 START TEST nvmf_target_core_interrupt_mode 00:34:53.433 ************************************ 00:34:53.433 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:34:53.433 * Looking for test storage... 
00:34:53.433 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:34:53.433 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:53.433 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:34:53.434 12:18:18 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:53.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.434 --rc 
genhtml_branch_coverage=1 00:34:53.434 --rc genhtml_function_coverage=1 00:34:53.434 --rc genhtml_legend=1 00:34:53.434 --rc geninfo_all_blocks=1 00:34:53.434 --rc geninfo_unexecuted_blocks=1 00:34:53.434 00:34:53.434 ' 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:53.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.434 --rc genhtml_branch_coverage=1 00:34:53.434 --rc genhtml_function_coverage=1 00:34:53.434 --rc genhtml_legend=1 00:34:53.434 --rc geninfo_all_blocks=1 00:34:53.434 --rc geninfo_unexecuted_blocks=1 00:34:53.434 00:34:53.434 ' 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:53.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.434 --rc genhtml_branch_coverage=1 00:34:53.434 --rc genhtml_function_coverage=1 00:34:53.434 --rc genhtml_legend=1 00:34:53.434 --rc geninfo_all_blocks=1 00:34:53.434 --rc geninfo_unexecuted_blocks=1 00:34:53.434 00:34:53.434 ' 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:53.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.434 --rc genhtml_branch_coverage=1 00:34:53.434 --rc genhtml_function_coverage=1 00:34:53.434 --rc genhtml_legend=1 00:34:53.434 --rc geninfo_all_blocks=1 00:34:53.434 --rc geninfo_unexecuted_blocks=1 00:34:53.434 00:34:53.434 ' 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@50 -- # : 0 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:53.434 12:18:18 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:34:53.434 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:34:53.435 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:34:53.435 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:53.435 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:53.435 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:53.435 ************************************ 00:34:53.435 START TEST nvmf_abort 00:34:53.435 ************************************ 00:34:53.435 12:18:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:34:53.695 * Looking for test storage... 00:34:53.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@341 -- # ver2_l=1 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:53.695 12:18:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:53.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.695 --rc genhtml_branch_coverage=1 00:34:53.695 --rc genhtml_function_coverage=1 00:34:53.695 --rc genhtml_legend=1 00:34:53.695 --rc geninfo_all_blocks=1 00:34:53.695 --rc geninfo_unexecuted_blocks=1 00:34:53.695 00:34:53.695 ' 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:53.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.695 --rc genhtml_branch_coverage=1 00:34:53.695 --rc genhtml_function_coverage=1 00:34:53.695 --rc genhtml_legend=1 00:34:53.695 --rc geninfo_all_blocks=1 00:34:53.695 --rc geninfo_unexecuted_blocks=1 00:34:53.695 00:34:53.695 ' 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:53.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.695 --rc genhtml_branch_coverage=1 00:34:53.695 --rc genhtml_function_coverage=1 00:34:53.695 --rc genhtml_legend=1 00:34:53.695 --rc geninfo_all_blocks=1 00:34:53.695 --rc geninfo_unexecuted_blocks=1 00:34:53.695 00:34:53.695 ' 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:53.695 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:53.695 --rc genhtml_branch_coverage=1 00:34:53.695 --rc 
genhtml_function_coverage=1 00:34:53.695 --rc genhtml_legend=1 00:34:53.695 --rc geninfo_all_blocks=1 00:34:53.695 --rc geninfo_unexecuted_blocks=1 00:34:53.695 00:34:53.695 ' 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:53.695 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:53.696 12:18:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:53.696 
12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:34:53.696 12:18:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # 
mlx=() 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:01.834 
12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:01.834 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:01.835 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:01.835 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:01.835 12:18:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:01.835 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:01.835 12:18:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:01.835 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:01.835 12:18:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@247 -- # create_target_ns 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 
00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 
00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@200 -- # echo 10.0.0.1 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:01.835 10.0.0.1 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:35:01.835 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:35:01.836 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias 00:35:01.836 10.0.0.2 00:35:01.836 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:35:01.836 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:35:01.836 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:01.836 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:35:01.836 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:35:01.836 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:01.836 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:01.836 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:01.836 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:01.836 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:01.836 12:18:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j 
ACCEPT 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:01.836 12:18:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:01.836 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:01.836 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.568 ms 00:35:01.836 00:35:01.836 --- 10.0.0.1 ping statistics --- 00:35:01.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.836 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:01.836 
12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:35:01.836 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.836 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:35:01.836 00:35:01.836 --- 10.0.0.2 ping statistics --- 00:35:01.836 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.836 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 
00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:01.836 12:18:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:35:01.836 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort 
-- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:01.837 12:18:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:35:01.837 ' 00:35:01.837 12:18:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=1560467 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 1560467 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1560467 ']' 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:01.837 12:18:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:01.837 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:01.837 12:18:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:01.837 [2024-12-05 12:18:26.344748] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:01.837 [2024-12-05 12:18:26.345884] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:35:01.837 [2024-12-05 12:18:26.345932] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:01.837 [2024-12-05 12:18:26.445842] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:01.837 [2024-12-05 12:18:26.498937] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:01.837 [2024-12-05 12:18:26.498982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:01.837 [2024-12-05 12:18:26.498991] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:01.837 [2024-12-05 12:18:26.498998] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:01.837 [2024-12-05 12:18:26.499004] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:01.837 [2024-12-05 12:18:26.500884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:01.837 [2024-12-05 12:18:26.501046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:01.837 [2024-12-05 12:18:26.501048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:01.837 [2024-12-05 12:18:26.578563] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:01.837 [2024-12-05 12:18:26.579799] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:01.837 [2024-12-05 12:18:26.579999] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:01.837 [2024-12-05 12:18:26.580177] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:35:02.411 12:18:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:02.411 [2024-12-05 12:18:27.205956] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:02.411 Malloc0 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:02.411 Delay0 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:02.411 [2024-12-05 12:18:27.305871] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.411 12:18:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:35:02.411 [2024-12-05 12:18:27.448170] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:35:04.951 Initializing NVMe Controllers 00:35:04.951 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:35:04.951 controller IO queue size 128 less than required 00:35:04.951 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:35:04.951 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:35:04.951 Initialization complete. Launching workers. 00:35:04.951 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28651 00:35:04.951 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28712, failed to submit 66 00:35:04.951 success 28651, unsuccessful 61, failed 0 00:35:04.951 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:04.951 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.951 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:04.951 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.951 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:35:04.951 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:35:04.951 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:04.951 
12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:35:04.951 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:04.951 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:35:04.951 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:04.952 rmmod nvme_tcp 00:35:04.952 rmmod nvme_fabrics 00:35:04.952 rmmod nvme_keyring 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 1560467 ']' 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 1560467 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1560467 ']' 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1560467 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1560467 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1560467' 00:35:04.952 killing process with pid 1560467 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1560467 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1560467 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@254 -- # local dev 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@257 -- # remove_target_ns 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:04.952 12:18:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:07.509 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@258 -- # delete_main_bridge 00:35:07.509 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:07.509 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@121 -- # return 0 00:35:07.509 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in 
"${dev_map[@]}" 00:35:07.509 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:07.509 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:07.509 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:35:07.509 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:35:07.509 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:07.509 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:35:07.509 12:18:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@273 -- # reset_setup_interfaces 
00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@274 -- # iptr 00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # iptables-save 00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:35:07.509 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # iptables-restore 00:35:07.509 00:35:07.509 real 0m13.634s 00:35:07.509 user 0m11.204s 00:35:07.509 sys 0m7.225s 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:35:07.510 ************************************ 00:35:07.510 END TEST nvmf_abort 00:35:07.510 ************************************ 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:07.510 ************************************ 00:35:07.510 START TEST nvmf_ns_hotplug_stress 00:35:07.510 ************************************ 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:35:07.510 * Looking for test storage... 00:35:07.510 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@340 -- # ver1_l=2 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 
00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:07.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.510 --rc genhtml_branch_coverage=1 00:35:07.510 --rc genhtml_function_coverage=1 00:35:07.510 --rc genhtml_legend=1 00:35:07.510 --rc geninfo_all_blocks=1 00:35:07.510 --rc geninfo_unexecuted_blocks=1 00:35:07.510 00:35:07.510 ' 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:07.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.510 --rc genhtml_branch_coverage=1 00:35:07.510 --rc genhtml_function_coverage=1 00:35:07.510 --rc genhtml_legend=1 00:35:07.510 --rc geninfo_all_blocks=1 00:35:07.510 --rc geninfo_unexecuted_blocks=1 00:35:07.510 00:35:07.510 ' 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:07.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.510 --rc genhtml_branch_coverage=1 00:35:07.510 --rc genhtml_function_coverage=1 00:35:07.510 --rc genhtml_legend=1 00:35:07.510 --rc 
geninfo_all_blocks=1 00:35:07.510 --rc geninfo_unexecuted_blocks=1 00:35:07.510 00:35:07.510 ' 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:07.510 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:07.510 --rc genhtml_branch_coverage=1 00:35:07.510 --rc genhtml_function_coverage=1 00:35:07.510 --rc genhtml_legend=1 00:35:07.510 --rc geninfo_all_blocks=1 00:35:07.510 --rc geninfo_unexecuted_blocks=1 00:35:07.510 00:35:07.510 ' 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:07.510 12:18:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.510 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.511 
12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:07.511 12:18:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:35:07.511 12:18:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:35:07.511 12:18:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@135 -- # net_devs=() 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:35:15.648 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:35:15.648 12:18:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:35:15.648 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:15.648 12:18:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:35:15.648 Found net devices under 0000:4b:00.0: cvl_0_0 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:35:15.648 Found net devices under 0000:4b:00.1: cvl_0_1 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:35:15.648 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@247 -- # create_target_ns 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@133 -- # 
local ns=nvmf_ns_spdk 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:35:15.649 12:18:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/cvl_0_0/ifalias' 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:15.649 10.0.0.1 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias' 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:15.649 10.0.0.2 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:15.649 12:18:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 
00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:15.649 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 
NVMF_TARGET_NS_CMD 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:15.650 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:15.650 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.624 ms 00:35:15.650 00:35:15.650 --- 10.0.0.1 ping statistics --- 00:35:15.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.650 rtt min/avg/max/mdev = 0.624/0.624/0.624/0.000 ms 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 
00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:35:15.650 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:15.650 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:35:15.650 00:35:15.650 --- 10.0.0.2 ping statistics --- 00:35:15.650 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.650 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- 
# local dev=initiator1 in_ns= ip 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:15.650 12:18:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:15.650 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:35:15.651 ' 00:35:15.651 12:18:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:15.651 12:18:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:15.651 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:35:15.651 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:15.651 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:15.651 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:15.651 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=1565489 00:35:15.651 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 1565489 00:35:15.651 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:35:15.651 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 
1565489 ']' 00:35:15.651 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.651 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.651 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:15.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:15.651 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.651 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:15.651 [2024-12-05 12:18:40.088313] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:15.651 [2024-12-05 12:18:40.089476] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:35:15.651 [2024-12-05 12:18:40.089540] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:15.651 [2024-12-05 12:18:40.192981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:15.651 [2024-12-05 12:18:40.245355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:15.651 [2024-12-05 12:18:40.245404] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:15.651 [2024-12-05 12:18:40.245414] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:15.651 [2024-12-05 12:18:40.245421] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:15.651 [2024-12-05 12:18:40.245427] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:15.651 [2024-12-05 12:18:40.247527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:15.651 [2024-12-05 12:18:40.247692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:15.651 [2024-12-05 12:18:40.247693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.651 [2024-12-05 12:18:40.326762] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:15.651 [2024-12-05 12:18:40.327847] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:35:15.651 [2024-12-05 12:18:40.328241] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:15.651 [2024-12-05 12:18:40.328397] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:35:15.911 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.911 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:35:15.911 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:15.912 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:15.912 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:15.912 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:15.912 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:35:15.912 12:18:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:35:16.172 [2024-12-05 12:18:41.128732] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.172 12:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:35:16.431 12:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:16.690 [2024-12-05 12:18:41.497619] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:35:16.690 12:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:35:16.690 12:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:35:16.949 Malloc0 00:35:16.949 12:18:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:35:17.209 Delay0 00:35:17.209 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:17.468 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:35:17.468 NULL1 00:35:17.468 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:35:17.728 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1565860 00:35:17.728 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:17.728 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:35:17.728 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:17.989 12:18:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:18.248 12:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:35:18.248 12:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:35:18.248 true 00:35:18.248 12:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:18.248 12:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:18.509 12:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:18.770 12:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:35:18.770 12:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:35:19.031 true 00:35:19.031 12:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:19.031 12:18:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:19.291 12:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:19.291 12:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:35:19.291 12:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:35:19.551 true 00:35:19.551 12:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:19.551 12:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:19.812 12:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:20.074 12:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:35:20.074 12:18:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:35:20.074 true 00:35:20.074 12:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:20.074 12:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:20.333 12:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:20.593 12:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:35:20.593 12:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:35:20.593 true 00:35:20.853 12:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:20.853 12:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:20.853 12:18:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:21.112 12:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:35:21.112 12:18:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:35:21.371 true 00:35:21.371 12:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:21.372 12:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:21.372 12:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:21.631 12:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:35:21.631 12:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:35:21.896 true 00:35:21.896 12:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:21.896 12:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:21.896 12:18:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:22.307 12:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 
00:35:22.307 12:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:35:22.307 true 00:35:22.307 12:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:22.307 12:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:22.568 12:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:22.829 12:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:35:22.829 12:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:35:22.829 true 00:35:22.829 12:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:22.829 12:18:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:23.088 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:23.349 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1010 00:35:23.349 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:35:23.349 true 00:35:23.610 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:23.610 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:23.610 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:23.869 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:35:23.869 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:35:24.128 true 00:35:24.128 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:24.128 12:18:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:24.128 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:24.387 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:35:24.387 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:35:24.646 true 00:35:24.646 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:24.646 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:24.646 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:24.905 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:35:24.905 12:18:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:35:25.182 true 00:35:25.182 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:25.182 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:25.440 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:25.441 12:18:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:35:25.441 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:35:25.700 true 00:35:25.700 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:25.700 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:25.959 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:25.959 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:35:25.959 12:18:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:35:26.218 true 00:35:26.218 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:26.218 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:26.478 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:35:26.737 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:35:26.737 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:35:26.737 true 00:35:26.737 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:26.737 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:26.995 12:18:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:27.255 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:35:27.255 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:35:27.255 true 00:35:27.255 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:27.255 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:27.514 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:35:27.774 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:35:27.774 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:35:28.035 true 00:35:28.035 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:28.035 12:18:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:28.035 12:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:28.295 12:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:35:28.295 12:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:35:28.555 true 00:35:28.555 12:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:28.555 12:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:28.815 12:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:28.815 12:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:35:28.815 12:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:35:29.075 true 00:35:29.075 12:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:29.075 12:18:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:29.335 12:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:29.335 12:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:35:29.335 12:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:35:29.595 true 00:35:29.595 12:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:29.595 12:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:29.856 12:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:29.856 12:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:35:29.856 12:18:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:35:30.117 true 00:35:30.117 12:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:30.117 12:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:30.382 12:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:30.644 12:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:35:30.644 12:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:35:30.644 true 00:35:30.644 12:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:30.644 12:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:30.904 12:18:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:31.163 12:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:35:31.164 12:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:35:31.164 true 00:35:31.423 12:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:31.423 12:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:31.423 12:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:31.684 12:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:35:31.684 12:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:35:31.947 true 00:35:31.947 12:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:31.947 12:18:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:31.947 12:18:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:32.208 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:35:32.208 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:35:32.470 true 00:35:32.470 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:32.470 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:32.470 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:32.731 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:35:32.731 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:35:32.992 true 00:35:32.992 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:32.992 12:18:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:35:33.252 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:33.252 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:35:33.252 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:35:33.512 true 00:35:33.512 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:33.512 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:33.772 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:33.772 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:35:33.772 12:18:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:35:34.034 true 00:35:34.034 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:34.034 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:35:34.294 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:34.555 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:35:34.555 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:35:34.555 true 00:35:34.555 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:34.555 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:34.815 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:35.075 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:35:35.075 12:18:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:35:35.075 true 00:35:35.075 12:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:35.075 12:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:35.336 12:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:35.597 12:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:35:35.597 12:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:35:35.597 true 00:35:35.858 12:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:35.858 12:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:35.858 12:19:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:36.120 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:35:36.120 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:35:36.381 true 00:35:36.381 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:36.381 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:36.381 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:36.642 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:35:36.642 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:35:36.902 true 00:35:36.902 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:36.902 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:37.163 12:19:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:37.163 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:35:37.163 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:35:37.424 true 00:35:37.424 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:37.424 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:37.685 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:37.685 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:35:37.685 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:35:37.945 true 00:35:37.945 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:37.945 12:19:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:38.206 12:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:38.466 12:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:35:38.466 12:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:35:38.466 true 00:35:38.466 12:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:38.466 12:19:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:38.725 12:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:38.985 12:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:35:38.985 12:19:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:35:38.985 true 00:35:38.985 12:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:38.985 12:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:39.245 12:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:39.506 12:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:35:39.506 12:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:35:39.506 true 00:35:39.766 12:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 
00:35:39.766 12:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:39.766 12:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:40.026 12:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:35:40.026 12:19:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:35:40.286 true 00:35:40.286 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:40.286 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:40.286 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:40.545 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:35:40.545 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:35:40.805 true 00:35:40.805 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 1565860 00:35:40.805 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:41.066 12:19:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:41.066 12:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:35:41.066 12:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:35:41.327 true 00:35:41.327 12:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:41.327 12:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:41.588 12:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:41.588 12:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:35:41.588 12:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:35:41.848 true 00:35:41.848 12:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:41.848 12:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:42.108 12:19:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:42.368 12:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:35:42.368 12:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:35:42.368 true 00:35:42.368 12:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:42.368 12:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:42.629 12:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:42.891 12:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:35:42.891 12:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:35:42.891 true 00:35:42.891 12:19:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:42.891 12:19:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:43.153 12:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:43.414 12:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:35:43.414 12:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:35:43.414 true 00:35:43.673 12:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:43.673 12:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:43.673 12:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:43.934 12:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:35:43.934 12:19:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:35:44.194 true 
00:35:44.194 12:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:44.194 12:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:44.194 12:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:44.455 12:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:35:44.455 12:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:35:44.716 true 00:35:44.716 12:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:44.716 12:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:44.976 12:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:44.976 12:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:35:44.976 12:19:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 
00:35:45.236 true 00:35:45.236 12:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:45.236 12:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:45.497 12:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:45.497 12:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:35:45.497 12:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:35:45.756 true 00:35:45.756 12:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:45.756 12:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:46.017 12:19:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:46.278 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:35:46.278 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1051 00:35:46.278 true 00:35:46.278 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:46.278 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:46.538 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:46.797 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1052 00:35:46.797 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:35:46.797 true 00:35:46.797 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860 00:35:46.797 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:47.056 12:19:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:35:47.316 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1053 00:35:47.316 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053
00:35:47.316 true
00:35:47.316 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860
00:35:47.316 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:47.575 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:47.834 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1054
00:35:47.834 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054
00:35:48.094 true
00:35:48.094 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860
00:35:48.094 12:19:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:48.094 Initializing NVMe Controllers
00:35:48.094 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:35:48.094 Controller IO queue size 128, less than required.
00:35:48.094 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:35:48.094 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:35:48.094 Initialization complete. Launching workers.
00:35:48.094 ========================================================
00:35:48.094 Latency(us)
00:35:48.094 Device Information : IOPS MiB/s Average min max
00:35:48.094 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 30604.03 14.94 4182.55 1127.27 11154.93
00:35:48.094 ========================================================
00:35:48.094 Total : 30604.03 14.94 4182.55 1127.27 11154.93
00:35:48.094
00:35:48.094 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:35:48.372 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1055
00:35:48.372 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055
00:35:48.372 true
00:35:48.372 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1565860
00:35:48.372 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1565860) - No such process
00:35:48.372 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1565860
00:35:48.372 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:48.630 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:35:48.890 12:19:13
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:35:48.890 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:35:48.890 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:35:48.890 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:35:48.890 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:35:48.890 null0
00:35:49.149 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:35:49.149 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:35:49.149 12:19:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:35:49.149 null1
00:35:49.149 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:35:49.149 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:35:49.149 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:35:49.408 null2
00:35:49.408 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:35:49.408 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:35:49.408 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:35:49.408 null3
00:35:49.667 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:35:49.667 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:35:49.667 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:35:49.667 null4
00:35:49.667 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:35:49.667 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:35:49.667 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:35:49.927 null5
00:35:49.927 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:35:49.927 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:35:49.927 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:35:49.927 null6
00:35:49.927 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:35:49.927 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:35:49.927 12:19:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:35:50.186 null7
00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:50.186 12:19:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1572356 1572357 1572359 1572361 1572363 1572365 1572367 1572368 00:35:50.186 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:50.187 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:35:50.187 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:35:50.187 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:35:50.187 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.187 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:50.445 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:50.445 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:50.445 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:50.445 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:50.445 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:50.445 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:50.445 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:50.445 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:50.704 12:19:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:50.704 12:19:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:50.704 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:50.964 12:19:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.964 12:19:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:35:50.964 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:50.964 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:50.964 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:35:51.223 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:51.223 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:51.223 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:51.223 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:51.223 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:35:51.223 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:35:51.223 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:51.224 12:19:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:51.224 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:35:51.224 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:35:51.224 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:35:51.224 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:35:51.224 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:51.224 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:35:51.224 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:35:51.224 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:51.224 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:35:51.484 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:51.484 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:51.484 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:35:51.484 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:51.484 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:51.484 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:35:51.485 12:19:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:35:51.485 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:35:51.745 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:35:51.745 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:35:51.745 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:35:51.745 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:51.745 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:51.745 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:35:51.745 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:51.745 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:35:51.746 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:35:52.006 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:52.006 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:35:52.006 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:35:52.006 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:35:52.006 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:35:52.006 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:35:52.006 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:35:52.006 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.006 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.006 12:19:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:35:52.006 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.266 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.526 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:52.786 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:35:53.047 12:19:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:35:53.047 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:35:53.047 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:35:53.047 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:35:53.047 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:35:53.047 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:35:53.047 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.047 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.047 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.307 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:35:53.308 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.308 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.308 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:35:53.308 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.308 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.308 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:35:53.568 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:35:53.829 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:35:54.090 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:54.090 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:54.090 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:54.090 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:54.090 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:54.090 12:19:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:54.090 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:54.090 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:54.090 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:54.090 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:54.090 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:35:54.090 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:35:54.090 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:35:54.090 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:35:54.090 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:54.090 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:35:54.090 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:54.090 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:35:54.090 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:54.090 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:54.090 rmmod nvme_tcp 00:35:54.090 rmmod nvme_fabrics 00:35:54.350 rmmod nvme_keyring 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 1565489 ']' 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 1565489 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1565489 ']' 00:35:54.350 
12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1565489 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1565489 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1565489' 00:35:54.350 killing process with pid 1565489 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1565489 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1565489 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@254 -- # local dev 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 
00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:54.350 12:19:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:56.899 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:35:56.899 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:56.899 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # return 0 00:35:56.899 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:56.899 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:56.899 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:56.899 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:56.900 12:19:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@274 -- # iptr 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-save 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-restore 00:35:56.900 00:35:56.900 real 0m49.352s 00:35:56.900 user 3m4.214s 00:35:56.900 sys 0m22.122s 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:35:56.900 ************************************ 00:35:56.900 END TEST nvmf_ns_hotplug_stress 00:35:56.900 ************************************ 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:56.900 ************************************ 00:35:56.900 START TEST nvmf_delete_subsystem 00:35:56.900 ************************************ 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:35:56.900 * Looking for test storage... 
00:35:56.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:35:56.900 12:19:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:35:56.900 12:19:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:56.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.900 --rc genhtml_branch_coverage=1 00:35:56.900 --rc genhtml_function_coverage=1 00:35:56.900 --rc genhtml_legend=1 00:35:56.900 --rc geninfo_all_blocks=1 00:35:56.900 --rc geninfo_unexecuted_blocks=1 00:35:56.900 00:35:56.900 ' 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:56.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.900 --rc genhtml_branch_coverage=1 00:35:56.900 --rc genhtml_function_coverage=1 00:35:56.900 --rc genhtml_legend=1 00:35:56.900 --rc geninfo_all_blocks=1 00:35:56.900 --rc geninfo_unexecuted_blocks=1 00:35:56.900 00:35:56.900 ' 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:56.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.900 --rc genhtml_branch_coverage=1 00:35:56.900 --rc genhtml_function_coverage=1 00:35:56.900 --rc genhtml_legend=1 00:35:56.900 --rc geninfo_all_blocks=1 00:35:56.900 --rc geninfo_unexecuted_blocks=1 00:35:56.900 00:35:56.900 ' 00:35:56.900 12:19:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:56.900 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:56.900 --rc genhtml_branch_coverage=1 00:35:56.900 --rc genhtml_function_coverage=1 00:35:56.900 --rc genhtml_legend=1 00:35:56.900 --rc geninfo_all_blocks=1 00:35:56.900 --rc geninfo_unexecuted_blocks=1 00:35:56.900 00:35:56.900 ' 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:56.900 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.901 
12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:56.901 12:19:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:35:56.901 12:19:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 
-- # e810=() 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:05.047 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:05.047 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # 
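The device-ID checks above appear in the trace as `[[ 0x159b == \0\x\1\0\1\7 ]]` because `==` inside `[[ ]]` performs glob matching, so xtrace prints the right-hand side with every character backslash-escaped to force a literal comparison. A minimal sketch of that behavior (the `match_id` helper is hypothetical, not a function from the test scripts):

```shell
#!/usr/bin/env bash
# Inside [[ ]], == does pattern (glob) matching; a fully backslash-escaped
# right-hand side is therefore an exact string comparison. This is why the
# xtrace output shows \0\x\1\0\1\7 rather than 0x1017.
match_id() {
    local id=$1
    if [[ $id == \0\x\1\0\1\7 ]]; then
        echo "mlx5 0x1017"
    else
        echo "no match for $id"
    fi
}

match_id 0x159b   # no match for 0x159b
match_id 0x1017   # mlx5 0x1017
```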
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:05.047 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:05.047 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:05.048 12:19:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:05.048 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@247 -- # create_target_ns 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:36:05.048 12:19:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- 
# (( _dev < max + no )) 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@143 
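The `setup_interfaces` loop above carves initiator/target address pairs out of a single integer pool (`0x0a000001`, i.e. 10.0.0.1), two consecutive addresses per pair, as the `(( _dev++, ip_pool += 2 ))` steps show. A sketch of that arithmetic under the same variable names (the pair count of 1 matches this run of the log):

```shell
#!/usr/bin/env bash
# Each initiator/target pair consumes two consecutive addresses from the pool.
ip_pool=$((0x0a000001))   # 10.0.0.1 as a 32-bit integer (167772161)
no=1                      # number of pairs requested in this run

for (( _dev = 0; _dev < no; _dev++, ip_pool += 2 )); do
    echo "pair $_dev: initiator=$ip_pool target=$((ip_pool + 1))"
done
```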
-- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:36:05.048 12:19:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:36:05.048 10.0.0.1 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # 
set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:36:05.048 10.0.0.2 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # 
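The `val_to_ip` calls above turn the pool integers into dotted-quad addresses via `printf '%u.%u.%u.%u\n'`, e.g. 167772161 → 10.0.0.1 and 167772162 → 10.0.0.2. A self-contained sketch (the function name follows the log; the bit-shift body is an assumption about how the four octets are extracted):

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer (e.g. 167772161 = 0x0a000001) to dotted-quad form,
# one octet per byte, most significant byte first.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```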
set_up cvl_0_0 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 
-j ACCEPT 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:05.048 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev 
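The `dev_map` assignments above tie the logical names `initiator0`/`target0` to the physical interfaces `cvl_0_0`/`cvl_0_1`; later in the log, `get_net_dev initiator1` returns 1 because no second pair was mapped, which is why `NVMF_SECOND_INITIATOR_IP` ends up empty. A sketch of that lookup, assuming this simplified body for `get_net_dev`:

```shell
#!/usr/bin/env bash
# Logical-to-physical interface map, as populated by setup_interface_pair.
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

# Resolve a logical device name; fail (status 1, no output) when the
# corresponding pair was never set up.
get_net_dev() {
    local dev=$1
    [[ -n ${dev_map["$dev"]} ]] || return 1
    echo "${dev_map["$dev"]}"
}

get_net_dev initiator0                       # cvl_0_0
get_net_dev initiator1 || echo "unmapped"    # unmapped
```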
initiator0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:05.049 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:05.049 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.640 ms 00:36:05.049 00:36:05.049 --- 10.0.0.1 ping statistics --- 00:36:05.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.049 rtt min/avg/max/mdev = 0.640/0.640/0.640/0.000 ms 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:05.049 12:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:36:05.049 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:05.049 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:36:05.049 00:36:05.049 --- 10.0.0.2 ping statistics --- 00:36:05.049 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:05.049 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev 
initiator0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:36:05.049 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:05.050 12:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:36:05.050 12:19:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:36:05.050 ' 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 
00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=1577437 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 1577437 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1577437 ']' 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # 
local rpc_addr=/var/tmp/spdk.sock 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:05.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:05.050 12:19:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:05.050 [2024-12-05 12:19:29.410968] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:05.050 [2024-12-05 12:19:29.412108] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:36:05.050 [2024-12-05 12:19:29.412161] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:05.050 [2024-12-05 12:19:29.513630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:36:05.050 [2024-12-05 12:19:29.565601] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:05.050 [2024-12-05 12:19:29.565655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:05.050 [2024-12-05 12:19:29.565663] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:05.050 [2024-12-05 12:19:29.565670] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:36:05.050 [2024-12-05 12:19:29.565676] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:05.050 [2024-12-05 12:19:29.567345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:05.050 [2024-12-05 12:19:29.567347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:05.050 [2024-12-05 12:19:29.645710] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:05.050 [2024-12-05 12:19:29.646475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:05.050 [2024-12-05 12:19:29.646654] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 
-- # xtrace_disable 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:05.311 [2024-12-05 12:19:30.292478] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:05.311 [2024-12-05 12:19:30.324867] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:36:05.311 12:19:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:05.311 NULL1 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:05.311 Delay0 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:05.311 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:05.573 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:05.573 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1577575 00:36:05.573 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:36:05.573 12:19:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:05.573 [2024-12-05 12:19:30.458143] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:36:07.488 12:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:07.488 12:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.488 12:19:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read 
completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 [2024-12-05 12:19:32.560004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x149b2c0 is same with the state(6) to be set 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed 
with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 
00:36:07.750 starting I/O failed: -6 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 starting I/O failed: -6 00:36:07.750 [2024-12-05 12:19:32.561392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9840000c40 is same with the state(6) to be set 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read 
completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.750 Write completed with error (sct=0, sc=8) 00:36:07.750 Read completed with error (sct=0, sc=8) 00:36:07.751 Write completed with error (sct=0, sc=8) 00:36:07.751 Read completed with error (sct=0, sc=8) 00:36:07.751 Write completed with error (sct=0, sc=8) 00:36:07.751 Read completed with error (sct=0, 
sc=8) 00:36:07.751 Write completed with error (sct=0, sc=8) 00:36:07.751 Read completed with error (sct=0, sc=8) 00:36:07.751 Write completed with error (sct=0, sc=8) 00:36:07.751 Read completed with error (sct=0, sc=8) 00:36:07.751 Read completed with error (sct=0, sc=8) 00:36:08.694 [2024-12-05 12:19:33.518228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149c9b0 is same with the state(6) to be set 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 [2024-12-05 12:19:33.560133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149bb40 is same with the state(6) to be set 00:36:08.694 Read completed with error (sct=0, sc=8) 
00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 [2024-12-05 12:19:33.560229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f984000d020 is same with the state(6) to be set 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 
Read completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Read completed with error (sct=0, sc=8) 00:36:08.694 Write completed with error (sct=0, sc=8) 00:36:08.695 [2024-12-05 12:19:33.560305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f984000d7c0 is same with the state(6) to be set 00:36:08.695 Write completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Write completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Write completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Write completed with error (sct=0, sc=8) 00:36:08.695 Write completed with error (sct=0, sc=8) 00:36:08.695 Write completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 Read completed with error (sct=0, sc=8) 00:36:08.695 [2024-12-05 12:19:33.563563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149b780 is same with the state(6) to be set 00:36:08.695 Initializing NVMe Controllers 00:36:08.695 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:08.695 Controller IO queue size 128, less than required. 00:36:08.695 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:08.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:08.695 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:08.695 Initialization complete. Launching workers. 00:36:08.695 ======================================================== 00:36:08.695 Latency(us) 00:36:08.695 Device Information : IOPS MiB/s Average min max 00:36:08.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.30 0.08 892021.43 396.91 1012241.31 00:36:08.695 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 154.42 0.08 957400.95 333.78 1998070.18 00:36:08.695 ======================================================== 00:36:08.695 Total : 325.72 0.16 923016.90 333.78 1998070.18 00:36:08.695 00:36:08.695 [2024-12-05 12:19:33.563943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149c9b0 (9): Bad file descriptor 00:36:08.695 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:36:08.695 12:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:08.695 12:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:36:08.695 12:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1577575 00:36:08.695 12:19:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:36:09.268 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( 
delay++ > 30 )) 00:36:09.268 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1577575 00:36:09.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1577575) - No such process 00:36:09.268 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1577575 00:36:09.268 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:36:09.268 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1577575 00:36:09.268 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:36:09.268 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1577575 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:09.269 12:19:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:09.269 [2024-12-05 12:19:34.096739] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.269 12:19:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1578245 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1578245 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:36:09.269 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:09.269 [2024-12-05 12:19:34.202372] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:36:09.845 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:09.845 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1578245 00:36:09.845 12:19:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:10.197 12:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:10.197 12:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1578245 00:36:10.197 12:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:10.817 12:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:10.817 12:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1578245 00:36:10.817 12:19:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:11.103 12:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:11.103 12:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1578245 00:36:11.103 12:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:11.674 12:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:11.674 12:19:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1578245 00:36:11.674 12:19:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:12.243 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:12.243 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1578245 00:36:12.243 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:36:12.503 Initializing NVMe Controllers 00:36:12.503 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:12.503 Controller IO queue size 128, less than required. 00:36:12.503 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:12.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:36:12.503 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:36:12.503 Initialization complete. Launching workers. 
00:36:12.503 ======================================================== 00:36:12.503 Latency(us) 00:36:12.503 Device Information : IOPS MiB/s Average min max 00:36:12.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003036.01 1000126.48 1009300.01 00:36:12.503 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004790.85 1000236.67 1041831.24 00:36:12.503 ======================================================== 00:36:12.503 Total : 256.00 0.12 1003913.43 1000126.48 1041831.24 00:36:12.503 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1578245 00:36:12.764 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1578245) - No such process 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1578245 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
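The xtrace above shows delete_subsystem.sh's liveness-polling idiom: after launching `spdk_nvme_perf` in the background, the script repeatedly probes the PID with `kill -0` (which sends no signal, only tests process existence) and sleeps 0.5s per iteration until the workload exits or a bounded iteration count is exceeded. A minimal sketch of that pattern, with illustrative names that are not SPDK's own:

```shell
#!/usr/bin/env bash
# Hedged sketch of the kill -0 polling loop seen in the xtrace.
# `kill -0 $pid` succeeds while the process exists and fails once it is gone
# ("No such process"), so the loop exits either on workload completion or
# after max_iters half-second polls.
wait_for_exit() {
    local pid=$1 max_iters=${2:-20} delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > max_iters )) && return 1   # timed out; still running
        sleep 0.5
    done
    return 0                                    # process has exited
}

sleep 1 &                  # stand-in for the spdk_nvme_perf workload
perf_pid=$!
wait_for_exit "$perf_pid" 20 && echo "workload finished"
```

In the log the loop falls through exactly this way: once perf exits, `kill -0 1578245` prints "No such process" and the script proceeds to `wait` on the PID to collect its exit status.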
nvmf/common.sh@103 -- # for i in {1..20} 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:36:12.764 rmmod nvme_tcp 00:36:12.764 rmmod nvme_fabrics 00:36:12.764 rmmod nvme_keyring 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 1577437 ']' 00:36:12.764 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 1577437 00:36:12.765 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1577437 ']' 00:36:12.765 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1577437 00:36:12.765 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:36:12.765 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:12.765 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1577437 00:36:12.765 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:12.765 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:12.765 12:19:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1577437' 00:36:12.765 killing process with pid 1577437 00:36:12.765 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1577437 00:36:12.765 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1577437 00:36:13.025 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:36:13.025 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:36:13.025 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@254 -- # local dev 00:36:13.025 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:36:13.025 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:13.026 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:13.026 12:19:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # return 0 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:14.936 12:19:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev 
cvl_0_1 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:36:14.936 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@274 -- # iptr 00:36:14.937 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-save 00:36:14.937 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:36:14.937 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-restore 00:36:14.937 00:36:14.937 real 0m18.421s 00:36:14.937 user 0m26.727s 00:36:14.937 sys 0m7.307s 00:36:14.937 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:14.937 12:19:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:36:14.937 ************************************ 00:36:14.937 END TEST nvmf_delete_subsystem 00:36:14.937 ************************************ 00:36:15.197 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:15.197 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:15.197 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:15.197 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:15.197 
************************************ 00:36:15.197 START TEST nvmf_host_management 00:36:15.197 ************************************ 00:36:15.197 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:36:15.197 * Looking for test storage... 00:36:15.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:15.197 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:15.197 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:36:15.197 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:15.197 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:15.197 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.197 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.197 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.197 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.197 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.198 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.198 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.198 12:19:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.198 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.198 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.198 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.198 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:36:15.198 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:36:15.198 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.198 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:15.198 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:15.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.459 --rc genhtml_branch_coverage=1 00:36:15.459 --rc genhtml_function_coverage=1 00:36:15.459 --rc genhtml_legend=1 00:36:15.459 --rc geninfo_all_blocks=1 00:36:15.459 --rc geninfo_unexecuted_blocks=1 00:36:15.459 00:36:15.459 ' 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:15.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.459 --rc genhtml_branch_coverage=1 00:36:15.459 --rc genhtml_function_coverage=1 00:36:15.459 --rc genhtml_legend=1 00:36:15.459 --rc geninfo_all_blocks=1 00:36:15.459 --rc geninfo_unexecuted_blocks=1 00:36:15.459 00:36:15.459 ' 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:15.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:36:15.459 --rc genhtml_branch_coverage=1 00:36:15.459 --rc genhtml_function_coverage=1 00:36:15.459 --rc genhtml_legend=1 00:36:15.459 --rc geninfo_all_blocks=1 00:36:15.459 --rc geninfo_unexecuted_blocks=1 00:36:15.459 00:36:15.459 ' 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:15.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.459 --rc genhtml_branch_coverage=1 00:36:15.459 --rc genhtml_function_coverage=1 00:36:15.459 --rc genhtml_legend=1 00:36:15.459 --rc geninfo_all_blocks=1 00:36:15.459 --rc geninfo_unexecuted_blocks=1 00:36:15.459 00:36:15.459 ' 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.459 12:19:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.459 
12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:36:15.459 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:36:15.460 12:19:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:36:15.460 12:19:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # 
net_devs=() 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:36:23.601 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:23.602 12:19:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:23.602 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice 
== unknown ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:23.602 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:36:23.602 12:19:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:23.602 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:23.602 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@247 -- # create_target_ns 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:23.602 
12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( 
_dev = _dev, max = _dev )) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # 
add_to_ns cvl_0_1 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:36:23.602 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:36:23.603 10.0.0.1 
00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:36:23.603 10.0.0.2 00:36:23.603 
12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:36:23.603 12:19:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:23.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:23.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.619 ms 00:36:23.603 00:36:23.603 --- 10.0.0.1 ping statistics --- 00:36:23.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.603 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:23.603 12:19:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:36:23.603 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:36:23.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:36:23.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:36:23.604 00:36:23.604 --- 10.0.0.2 ping statistics --- 00:36:23.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:23.604 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 
00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # 
get_net_dev target0 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:23.604 12:19:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:36:23.604 ' 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:36:23.604 12:19:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=1583270 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 1583270 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1583270 ']' 00:36:23.604 
12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:23.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:23.604 12:19:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:23.604 [2024-12-05 12:19:47.921656] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:23.604 [2024-12-05 12:19:47.922780] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:36:23.604 [2024-12-05 12:19:47.922829] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:23.605 [2024-12-05 12:19:48.023998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:23.605 [2024-12-05 12:19:48.077072] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:23.605 [2024-12-05 12:19:48.077121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:23.605 [2024-12-05 12:19:48.077131] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:23.605 [2024-12-05 12:19:48.077138] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:23.605 [2024-12-05 12:19:48.077144] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:23.605 [2024-12-05 12:19:48.079570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:23.605 [2024-12-05 12:19:48.079741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:23.605 [2024-12-05 12:19:48.079904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:23.605 [2024-12-05 12:19:48.079904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:23.605 [2024-12-05 12:19:48.158379] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:23.605 [2024-12-05 12:19:48.159411] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:23.605 [2024-12-05 12:19:48.159787] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:36:23.605 [2024-12-05 12:19:48.160282] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:23.605 [2024-12-05 12:19:48.160331] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:36:23.866 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:23.866 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:36:23.866 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:23.866 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:23.866 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:23.866 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:23.866 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:23.866 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.866 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:23.867 [2024-12-05 12:19:48.780893] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:23.867 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.867 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:36:23.867 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:23.867 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:23.867 12:19:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:23.867 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:36:23.867 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:36:23.867 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:23.867 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:23.867 Malloc0 00:36:23.867 [2024-12-05 12:19:48.893233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:23.867 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:23.867 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:36:23.867 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:23.867 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1583399 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1583399 /var/tmp/bdevperf.sock 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1583399 ']' 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:36:24.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:36:24.128 { 00:36:24.128 "params": { 00:36:24.128 "name": "Nvme$subsystem", 00:36:24.128 "trtype": "$TEST_TRANSPORT", 00:36:24.128 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:24.128 "adrfam": "ipv4", 00:36:24.128 "trsvcid": "$NVMF_PORT", 00:36:24.128 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:36:24.128 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:24.128 "hdgst": ${hdgst:-false}, 00:36:24.128 "ddgst": ${ddgst:-false} 00:36:24.128 }, 00:36:24.128 "method": "bdev_nvme_attach_controller" 00:36:24.128 } 00:36:24.128 EOF 00:36:24.128 )") 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:36:24.128 12:19:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:36:24.128 "params": { 00:36:24.128 "name": "Nvme0", 00:36:24.128 "trtype": "tcp", 00:36:24.128 "traddr": "10.0.0.2", 00:36:24.128 "adrfam": "ipv4", 00:36:24.128 "trsvcid": "4420", 00:36:24.128 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:24.128 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:24.128 "hdgst": false, 00:36:24.128 "ddgst": false 00:36:24.128 }, 00:36:24.128 "method": "bdev_nvme_attach_controller" 00:36:24.128 }' 00:36:24.128 [2024-12-05 12:19:49.002071] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:36:24.128 [2024-12-05 12:19:49.002144] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583399 ] 00:36:24.128 [2024-12-05 12:19:49.095817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.128 [2024-12-05 12:19:49.149224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.389 Running I/O for 10 seconds... 
00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:36:24.962 12:19:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=899 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 899 -ge 100 ']' 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.962 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:24.962 
[2024-12-05 12:19:49.908576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.962 [2024-12-05 12:19:49.908638] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.962 [2024-12-05 12:19:49.908647] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908694] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908723] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908730] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908737] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908752] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908802] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908809] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908822] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908830] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908837] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908864] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908880] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908903] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 
is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908910] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 [2024-12-05 12:19:49.908917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6720e0 is same with the state(6) to be set 00:36:24.963 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.963 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:36:24.963 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.963 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:24.963 [2024-12-05 12:19:49.915924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:24.963 [2024-12-05 12:19:49.915980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.915991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:24.963 [2024-12-05 12:19:49.915999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.916009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:24.963 [2024-12-05 12:19:49.916016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.916025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:24.963 [2024-12-05 12:19:49.916032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.916040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x168a010 is same with the state(6) to be set 00:36:24.963 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.963 12:19:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:36:24.963 [2024-12-05 12:19:49.928337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x168a010 (9): Bad file descriptor 00:36:24.963 [2024-12-05 12:19:49.928442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928619] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:11 nsid:1 lba:1408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.963 [2024-12-05 12:19:49.928722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.963 [2024-12-05 12:19:49.928735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:1536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.928752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.928769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:1792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.928786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.928804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:36:24.964 [2024-12-05 12:19:49.928822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.928838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.928855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:2432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.928872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:2560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.928889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.928905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:2816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928913] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.928923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:2944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.928939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.928958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.928976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.928983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.928993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:28 nsid:1 lba:3584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:3712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:3968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:4864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 
12:19:49.929212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929308] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:6016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.964 [2024-12-05 12:19:49.929412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.964 [2024-12-05 12:19:49.929419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.965 [2024-12-05 12:19:49.929429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.965 [2024-12-05 12:19:49.929437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.965 [2024-12-05 12:19:49.929447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.965 [2024-12-05 12:19:49.929460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.965 [2024-12-05 12:19:49.929470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.965 [2024-12-05 12:19:49.929478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.965 [2024-12-05 12:19:49.929487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.965 [2024-12-05 12:19:49.929495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.965 [2024-12-05 12:19:49.929504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.965 
[2024-12-05 12:19:49.929511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.965 [2024-12-05 12:19:49.929521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:7296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.965 [2024-12-05 12:19:49.929528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.965 [2024-12-05 12:19:49.929537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.965 [2024-12-05 12:19:49.929546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.965 [2024-12-05 12:19:49.929555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.965 [2024-12-05 12:19:49.929563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.965 [2024-12-05 12:19:49.929573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:7680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:24.965 [2024-12-05 12:19:49.929580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:24.965 [2024-12-05 12:19:49.930846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:36:24.965 task offset: 130688 on job bdev=Nvme0n1 fails 00:36:24.965 00:36:24.965 Latency(us) 00:36:24.965 [2024-12-05T11:19:50.014Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:24.965 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO 
size: 65536) 00:36:24.965 Job: Nvme0n1 ended in about 0.61 seconds with error 00:36:24.965 Verification LBA range: start 0x0 length 0x400 00:36:24.965 Nvme0n1 : 0.61 1662.47 103.90 104.21 0.00 35349.32 1665.71 33204.91 00:36:24.965 [2024-12-05T11:19:50.014Z] =================================================================================================================== 00:36:24.965 [2024-12-05T11:19:50.014Z] Total : 1662.47 103.90 104.21 0.00 35349.32 1665.71 33204.91 00:36:24.965 [2024-12-05 12:19:49.933055] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:36:24.965 [2024-12-05 12:19:49.986234] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:36:25.908 12:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1583399 00:36:25.908 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1583399) - No such process 00:36:25.908 12:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:36:25.908 12:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:36:25.908 12:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:36:25.908 12:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:36:25.908 12:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:36:25.908 12:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@372 -- # local subsystem config 00:36:25.908 12:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:36:25.908 12:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:36:25.908 { 00:36:25.908 "params": { 00:36:25.908 "name": "Nvme$subsystem", 00:36:25.908 "trtype": "$TEST_TRANSPORT", 00:36:25.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:25.908 "adrfam": "ipv4", 00:36:25.908 "trsvcid": "$NVMF_PORT", 00:36:25.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:25.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:25.908 "hdgst": ${hdgst:-false}, 00:36:25.908 "ddgst": ${ddgst:-false} 00:36:25.908 }, 00:36:25.908 "method": "bdev_nvme_attach_controller" 00:36:25.908 } 00:36:25.908 EOF 00:36:25.908 )") 00:36:25.908 12:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:36:25.908 12:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:36:25.908 12:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:36:25.908 12:19:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:36:25.908 "params": { 00:36:25.908 "name": "Nvme0", 00:36:25.908 "trtype": "tcp", 00:36:25.908 "traddr": "10.0.0.2", 00:36:25.908 "adrfam": "ipv4", 00:36:25.908 "trsvcid": "4420", 00:36:25.908 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:25.908 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:25.908 "hdgst": false, 00:36:25.908 "ddgst": false 00:36:25.908 }, 00:36:25.908 "method": "bdev_nvme_attach_controller" 00:36:25.908 }' 00:36:26.168 [2024-12-05 12:19:50.987912] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:36:26.168 [2024-12-05 12:19:50.987987] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1583830 ] 00:36:26.168 [2024-12-05 12:19:51.080282] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:26.168 [2024-12-05 12:19:51.133884] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:26.429 Running I/O for 1 seconds... 00:36:27.370 1934.00 IOPS, 120.88 MiB/s 00:36:27.370 Latency(us) 00:36:27.370 [2024-12-05T11:19:52.419Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:27.370 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:36:27.370 Verification LBA range: start 0x0 length 0x400 00:36:27.370 Nvme0n1 : 1.06 1892.29 118.27 0.00 0.00 31885.38 4341.76 45001.39 00:36:27.370 [2024-12-05T11:19:52.419Z] =================================================================================================================== 00:36:27.370 [2024-12-05T11:19:52.419Z] Total : 1892.29 118.27 0.00 0.00 31885.38 4341.76 45001.39 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:36:27.631 rmmod nvme_tcp 00:36:27.631 rmmod nvme_fabrics 00:36:27.631 rmmod nvme_keyring 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 1583270 ']' 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 1583270 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1583270 ']' 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1583270 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:36:27.631 12:19:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1583270 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1583270' 00:36:27.631 killing process with pid 1583270 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1583270 00:36:27.631 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1583270 00:36:27.891 [2024-12-05 12:19:52.722651] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:36:27.891 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:36:27.891 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:36:27.891 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@254 -- # local dev 00:36:27.891 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@257 -- # remove_target_ns 00:36:27.891 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:27.891 12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:27.891 
12:19:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@258 -- # delete_main_bridge 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@121 -- # return 0 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 
== 3 )) 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@274 -- # iptr 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-save 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-restore 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:36:29.804 00:36:29.804 real 0m14.784s 00:36:29.804 user 0m19.535s 00:36:29.804 sys 0m7.510s 00:36:29.804 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:29.804 12:19:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:36:29.804 ************************************ 00:36:29.804 END TEST nvmf_host_management 00:36:29.804 ************************************ 00:36:30.065 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:36:30.065 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:30.065 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:30.065 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:30.065 ************************************ 00:36:30.065 START TEST nvmf_lvol 00:36:30.065 ************************************ 00:36:30.065 12:19:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:36:30.065 * Looking for test storage... 
00:36:30.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:36:30.065 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:30.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:30.325 --rc genhtml_branch_coverage=1 00:36:30.325 --rc genhtml_function_coverage=1 00:36:30.325 --rc genhtml_legend=1 00:36:30.325 --rc geninfo_all_blocks=1 00:36:30.325 --rc geninfo_unexecuted_blocks=1 00:36:30.325 00:36:30.325 ' 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:30.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:30.325 --rc genhtml_branch_coverage=1 00:36:30.325 --rc genhtml_function_coverage=1 00:36:30.325 --rc genhtml_legend=1 00:36:30.325 --rc geninfo_all_blocks=1 00:36:30.325 --rc geninfo_unexecuted_blocks=1 00:36:30.325 00:36:30.325 ' 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:30.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:30.325 --rc genhtml_branch_coverage=1 00:36:30.325 --rc genhtml_function_coverage=1 00:36:30.325 --rc genhtml_legend=1 00:36:30.325 --rc geninfo_all_blocks=1 00:36:30.325 --rc geninfo_unexecuted_blocks=1 00:36:30.325 00:36:30.325 ' 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:30.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:30.325 --rc genhtml_branch_coverage=1 00:36:30.325 --rc genhtml_function_coverage=1 00:36:30.325 --rc genhtml_legend=1 00:36:30.325 --rc geninfo_all_blocks=1 00:36:30.325 --rc geninfo_unexecuted_blocks=1 00:36:30.325 00:36:30.325 ' 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:36:30.325 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:36:30.326 12:19:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:36:38.462 12:20:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:38.462 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:38.463 12:20:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:36:38.463 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:36:38.463 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:38.463 12:20:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:36:38.463 Found net devices under 0000:4b:00.0: cvl_0_0 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:36:38.463 Found net devices under 0000:4b:00.1: cvl_0_1 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:36:38.463 12:20:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@247 -- # create_target_ns 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # 
ip netns exec nvmf_ns_spdk ip link set lo up 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:36:38.463 12:20:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:36:38.463 12:20:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:36:38.463 10.0.0.1 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- 
# eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:36:38.463 10.0.0.2 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:38.463 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:36:38.464 12:20:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:38.464 12:20:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:38.464 12:20:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:38.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:38.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.715 ms 00:36:38.464 00:36:38.464 --- 10.0.0.1 ping statistics --- 00:36:38.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:38.464 rtt min/avg/max/mdev = 0.715/0.715/0.715/0.000 ms 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:38.464 12:20:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:36:38.464 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
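The `val_to_ip` / `set_ip` calls traced above turn pool values such as 167772161 into the dotted-quad addresses 10.0.0.1 and 10.0.0.2. A minimal sketch of that conversion, reconstructed from the observed inputs and outputs (the function body is an assumption, not the actual nvmf/setup.sh source):

```shell
# Hypothetical reconstruction of the val_to_ip helper seen in the trace:
# it unpacks a 32-bit integer (167772161 == 0x0A000001) into dotted-quad form
# by shifting out one octet at a time.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This also explains the `ips=("$ip" $((++ip)))` pairing earlier in the trace: each initiator/target pair consumes two consecutive values from the pool, so pair 0 gets 10.0.0.1 and 10.0.0.2.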
00:36:38.464 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.329 ms 00:36:38.464 00:36:38.464 --- 10.0.0.2 ping statistics --- 00:36:38.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:38.464 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:38.464 12:20:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 
00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:36:38.464 12:20:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:38.464 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:36:38.465 ' 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 
-- # xtrace_disable 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=1588355 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 1588355 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1588355 ']' 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:38.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:38.465 12:20:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:38.465 [2024-12-05 12:20:02.813478] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:38.465 [2024-12-05 12:20:02.814597] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:36:38.465 [2024-12-05 12:20:02.814649] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:38.465 [2024-12-05 12:20:02.915600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:38.465 [2024-12-05 12:20:02.967048] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:38.465 [2024-12-05 12:20:02.967102] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:38.465 [2024-12-05 12:20:02.967111] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:38.465 [2024-12-05 12:20:02.967118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:38.465 [2024-12-05 12:20:02.967126] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:38.465 [2024-12-05 12:20:02.969233] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:38.465 [2024-12-05 12:20:02.969392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.465 [2024-12-05 12:20:02.969393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:38.465 [2024-12-05 12:20:03.047008] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:38.465 [2024-12-05 12:20:03.048098] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:38.465 [2024-12-05 12:20:03.048515] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
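The `-m 0x7` core mask passed to `nvmf_tgt` above selects CPU cores by bit position, which is why the log reports three reactors starting on cores 0, 1 and 2. A hypothetical helper (not part of the SPDK scripts) decoding such a mask:

```shell
# Decode an SPDK-style CPU core mask into the list of selected core numbers.
# 0x7 = 0b111 -> cores 0 1 2, matching the "Reactor started on core N" lines.
mask_to_cores() {
  local mask=$(( $1 )) core=0 cores=()
  while (( mask )); do
    if (( mask & 1 )); then
      cores+=("$core")
    fi
    mask=$(( mask >> 1 ))
    core=$(( core + 1 ))
  done
  echo "${cores[@]}"
}

mask_to_cores 0x7    # 0 1 2
mask_to_cores 0x18   # 3 4
```

The second example corresponds to the `-c 0x18` mask that `spdk_nvme_perf` is launched with later in this run, which is why its I/O queues are associated with lcores 3 and 4.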
00:36:38.465 [2024-12-05 12:20:03.048635] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:38.725 12:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:38.725 12:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:36:38.725 12:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:38.725 12:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:38.725 12:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:38.725 12:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:38.725 12:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:38.987 [2024-12-05 12:20:03.850295] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:38.987 12:20:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:39.249 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:36:39.249 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:36:39.510 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:36:39.510 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:36:39.510 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:36:39.771 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a646f9ff-b944-4315-a3ee-de998499ebc9 00:36:39.771 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a646f9ff-b944-4315-a3ee-de998499ebc9 lvol 20 00:36:40.031 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=de815e95-b0f6-4bfc-b1a0-0f1c8910ee52 00:36:40.031 12:20:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:40.301 12:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 de815e95-b0f6-4bfc-b1a0-0f1c8910ee52 00:36:40.301 12:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:40.569 [2024-12-05 12:20:05.434233] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:40.569 12:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:40.830 
12:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:36:40.830 12:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1588924 00:36:40.830 12:20:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:36:41.770 12:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot de815e95-b0f6-4bfc-b1a0-0f1c8910ee52 MY_SNAPSHOT 00:36:42.030 12:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=47b2f3de-4cdc-4fe2-8136-e076ba35fd67 00:36:42.030 12:20:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize de815e95-b0f6-4bfc-b1a0-0f1c8910ee52 30 00:36:42.291 12:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 47b2f3de-4cdc-4fe2-8136-e076ba35fd67 MY_CLONE 00:36:42.551 12:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6ca022bf-8c9e-4722-bdce-821875d077ea 00:36:42.551 12:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6ca022bf-8c9e-4722-bdce-821875d077ea 00:36:42.812 12:20:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1588924 00:36:50.948 Initializing NVMe Controllers 00:36:50.948 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:50.948 
Controller IO queue size 128, less than required. 00:36:50.948 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:36:50.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:36:50.948 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:36:50.948 Initialization complete. Launching workers. 00:36:50.948 ======================================================== 00:36:50.948 Latency(us) 00:36:50.948 Device Information : IOPS MiB/s Average min max 00:36:50.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 14958.60 58.43 8559.64 1807.61 57907.70 00:36:50.948 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 15461.40 60.40 8279.58 493.67 111797.27 00:36:50.948 ======================================================== 00:36:50.948 Total : 30420.00 118.83 8417.30 493.67 111797.27 00:36:50.948 00:36:50.948 12:20:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:51.209 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete de815e95-b0f6-4bfc-b1a0-0f1c8910ee52 00:36:51.209 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a646f9ff-b944-4315-a3ee-de998499ebc9 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:36:51.469 rmmod nvme_tcp 00:36:51.469 rmmod nvme_fabrics 00:36:51.469 rmmod nvme_keyring 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 1588355 ']' 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 1588355 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1588355 ']' 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1588355 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1588355 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1588355' 00:36:51.469 killing process with pid 1588355 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1588355 00:36:51.469 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1588355 00:36:51.729 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:36:51.729 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:36:51.729 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@254 -- # local dev 00:36:51.729 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@257 -- # remove_target_ns 00:36:51.729 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:51.729 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:51.729 12:20:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:53.640 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@258 -- # delete_main_bridge 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@121 -- # return 0 00:36:53.901 
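
The teardown traced above mirrors the setup in reverse: delete the subsystem, the lvol, and the lvstore, then `nvmftestfini` unloads the initiator kernel modules and kills the target process. A hedged dry-run sketch (same stubbed `rpc`, so it runs without a live target; the module-removal and kill steps are shown as comments since they need root and a real pid):

```shell
#!/bin/sh
# Dry-run sketch of the teardown steps traced in the log above.
rpc() { echo "rpc.py $*"; }

rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc bdev_lvol_delete de815e95-b0f6-4bfc-b1a0-0f1c8910ee52
rpc bdev_lvol_delete_lvstore -u a646f9ff-b944-4315-a3ee-de998499ebc9
# nvmftestfini then runs (privileged, on the real host):
#   modprobe -v -r nvme-tcp      -> "rmmod nvme_tcp" in the log
#   modprobe -v -r nvme-fabrics  -> "rmmod nvme_fabrics"
#   kill <nvmf target pid>       -> "killing process with pid 1588355"
```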
12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:36:53.901 12:20:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@274 -- # iptr 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-save 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:36:53.901 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-restore 00:36:53.901 00:36:53.901 real 0m23.789s 00:36:53.901 user 0m55.328s 00:36:53.901 sys 0m10.655s 00:36:53.902 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:53.902 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:36:53.902 ************************************ 00:36:53.902 END TEST nvmf_lvol 00:36:53.902 ************************************ 00:36:53.902 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:36:53.902 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:53.902 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:53.902 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:53.902 ************************************ 00:36:53.902 START TEST nvmf_lvs_grow 00:36:53.902 ************************************ 00:36:53.902 12:20:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:36:53.902 * Looking for test storage... 00:36:53.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:53.902 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:53.902 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:36:53.902 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:36:54.164 12:20:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.164 --rc genhtml_branch_coverage=1 00:36:54.164 --rc genhtml_function_coverage=1 00:36:54.164 --rc genhtml_legend=1 00:36:54.164 --rc geninfo_all_blocks=1 00:36:54.164 --rc geninfo_unexecuted_blocks=1 00:36:54.164 00:36:54.164 ' 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.164 --rc genhtml_branch_coverage=1 00:36:54.164 --rc genhtml_function_coverage=1 00:36:54.164 --rc genhtml_legend=1 00:36:54.164 --rc geninfo_all_blocks=1 00:36:54.164 --rc geninfo_unexecuted_blocks=1 00:36:54.164 00:36:54.164 ' 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:54.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:54.164 --rc genhtml_branch_coverage=1 00:36:54.164 --rc genhtml_function_coverage=1 00:36:54.164 --rc genhtml_legend=1 00:36:54.164 --rc geninfo_all_blocks=1 00:36:54.164 --rc geninfo_unexecuted_blocks=1 00:36:54.164 00:36:54.164 ' 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:54.164 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:36:54.164 --rc genhtml_branch_coverage=1 00:36:54.164 --rc genhtml_function_coverage=1 00:36:54.164 --rc genhtml_legend=1 00:36:54.164 --rc geninfo_all_blocks=1 00:36:54.164 --rc geninfo_unexecuted_blocks=1 00:36:54.164 00:36:54.164 ' 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:54.164 12:20:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:36:54.164 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:54.164 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:54.164 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:54.164 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:54.164 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:54.164 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:54.164 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:54.164 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:54.164 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:54.164 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:36:54.164 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:54.164 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:36:54.164 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:36:54.164 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@22 -- # _remove_target_ns 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:36:54.165 12:20:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- 
# x722=() 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:02.310 
12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:37:02.310 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:02.311 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:02.311 12:20:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:02.311 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:02.311 12:20:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:02.311 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:02.311 Found net devices under 0000:4b:00.1: cvl_0_1 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
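[Editor's note] The common.sh loop above resolves each E810 PCI function to its kernel net interface by globbing `/sys/bus/pci/devices/$pci/net/*` and then stripping the directory prefix with the `${pci_net_devs[@]##*/}` expansion (yielding `cvl_0_0` and `cvl_0_1` here). A minimal Python sketch of the same sysfs lookup, with the sysfs root passed as a parameter so it can be exercised against a fake tree; the helper name and parameter are illustrative, not part of SPDK:

```python
import os

def net_devs_under(pci_addr, sysfs_root="/sys"):
    """Return the kernel net interface names bound to a PCI function,
    mirroring common.sh's pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    followed by the ##*/ prefix strip."""
    net_dir = os.path.join(sysfs_root, "bus", "pci", "devices", pci_addr, "net")
    try:
        return sorted(os.listdir(net_dir))
    except FileNotFoundError:
        # No netdev under this function (e.g. the device is bound to vfio-pci)
        return []
```

On the log's host, `net_devs_under("0000:4b:00.0")` would correspond to the "Found net devices under 0000:4b:00.0: cvl_0_0" line.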
00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@247 -- # create_target_ns 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:37:02.311 
12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:37:02.311 
12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:37:02.311 12:20:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:37:02.311 10.0.0.1 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:37:02.311 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:37:02.312 12:20:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:37:02.312 10.0.0.2 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:37:02.312 12:20:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # 
get_initiator_ip_address initiator0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:37:02.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:02.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.660 ms 00:37:02.312 00:37:02.312 --- 10.0.0.1 ping statistics --- 00:37:02.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:02.312 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:37:02.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:02.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:37:02.312 00:37:02.312 --- 10.0.0.2 ping statistics --- 00:37:02.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:02.312 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@98 -- # local dev=initiator0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:02.312 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:37:02.313 12:20:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 
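[Editor's note] The setup.sh helpers traced above hand out addresses from an integer pool (`ip_pool=0x0a000001`) and convert each value with val_to_ip's `printf '%u.%u.%u.%u'` over the four bytes, so 167772161 becomes 10.0.0.1 for the initiator and the next pool value, 167772162, becomes 10.0.0.2 for the target. A hedged Python equivalent of that conversion (the function name mirrors the shell helper for readability only):

```python
def val_to_ip(val: int) -> str:
    """Convert a 32-bit integer to dotted-quad notation, like
    setup.sh's printf '%u.%u.%u.%u' over the shifted bytes."""
    return ".".join(str((val >> shift) & 0xFF) for shift in (24, 16, 8, 0))
```

For example, `val_to_ip(167772161)` gives `"10.0.0.1"`, matching the address assigned to cvl_0_0 in the log.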
00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:37:02.313 ' 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:37:02.313 
12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=1595106 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 1595106 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1595106 ']' 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:02.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:02.313 12:20:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:02.313 [2024-12-05 12:20:26.697334] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:02.313 [2024-12-05 12:20:26.698491] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:37:02.313 [2024-12-05 12:20:26.698541] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:02.313 [2024-12-05 12:20:26.798142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:02.313 [2024-12-05 12:20:26.849136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:02.313 [2024-12-05 12:20:26.849184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:02.313 [2024-12-05 12:20:26.849193] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:02.313 [2024-12-05 12:20:26.849199] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:02.313 [2024-12-05 12:20:26.849205] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:02.313 [2024-12-05 12:20:26.849994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:02.313 [2024-12-05 12:20:26.927610] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:02.313 [2024-12-05 12:20:26.927886] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
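[Editor's note] waitforlisten above blocks until the freshly launched nvmf_tgt (pid 1595106 in this run) accepts RPCs on /var/tmp/spdk.sock, retrying up to `max_retries=100` times. A rough Python sketch of such a poll loop against an arbitrary UNIX socket path; the helper name and retry timings are assumptions, not SPDK's actual implementation:

```python
import socket
import time

def wait_for_listen(sock_path, retries=100, delay=0.1):
    """Poll a UNIX domain socket until a server accepts connections,
    roughly like autotest_common.sh's waitforlisten loop."""
    for _ in range(retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True
        except OSError:
            # Server not up yet (socket missing or refusing); back off and retry
            time.sleep(delay)
        finally:
            s.close()
    return False
```

In the log, the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message corresponds to this retry phase, which ends once the target's RPC server is reachable.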
00:37:02.574 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:02.574 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:37:02.574 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:37:02.574 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:02.574 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:02.574 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:02.574 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:02.835 [2024-12-05 12:20:27.718904] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:02.835 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:37:02.835 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:02.835 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:02.835 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:02.835 ************************************ 00:37:02.835 START TEST lvs_grow_clean 00:37:02.835 ************************************ 00:37:02.835 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:37:02.835 12:20:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:02.835 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:02.835 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:02.835 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:02.835 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:02.835 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:02.835 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:02.835 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:02.835 12:20:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:03.095 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:03.095 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:03.356 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=f2b8b00c-2912-407e-847b-790f19109156 00:37:03.356 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2b8b00c-2912-407e-847b-790f19109156 00:37:03.356 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:03.617 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:03.617 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:03.617 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f2b8b00c-2912-407e-847b-790f19109156 lvol 150 00:37:03.617 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=14184dc1-5fad-4293-bd47-39a95934cb23 00:37:03.617 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:03.617 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:03.877 [2024-12-05 12:20:28.746596] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:03.877 [2024-12-05 12:20:28.746761] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:03.877 true 00:37:03.877 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2b8b00c-2912-407e-847b-790f19109156 00:37:03.877 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:04.138 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:04.138 12:20:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:04.138 12:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 14184dc1-5fad-4293-bd47-39a95934cb23 00:37:04.398 12:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:04.659 [2024-12-05 12:20:29.515224] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:04.659 12:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:04.659 12:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1595812 00:37:04.659 12:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:04.659 12:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:04.659 12:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1595812 /var/tmp/bdevperf.sock 00:37:04.659 12:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1595812 ']' 00:37:04.659 12:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:04.659 12:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:04.659 12:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:04.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:04.659 12:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:04.659 12:20:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:04.919 [2024-12-05 12:20:29.762519] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:37:04.919 [2024-12-05 12:20:29.762589] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1595812 ] 00:37:04.919 [2024-12-05 12:20:29.855371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:04.919 [2024-12-05 12:20:29.906890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:05.861 12:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:05.861 12:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:37:05.861 12:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:05.861 Nvme0n1 00:37:05.861 12:20:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:06.123 [ 00:37:06.123 { 00:37:06.123 "name": "Nvme0n1", 00:37:06.123 "aliases": [ 00:37:06.123 "14184dc1-5fad-4293-bd47-39a95934cb23" 00:37:06.123 ], 00:37:06.123 "product_name": "NVMe disk", 00:37:06.123 
"block_size": 4096, 00:37:06.123 "num_blocks": 38912, 00:37:06.123 "uuid": "14184dc1-5fad-4293-bd47-39a95934cb23", 00:37:06.123 "numa_id": 0, 00:37:06.123 "assigned_rate_limits": { 00:37:06.123 "rw_ios_per_sec": 0, 00:37:06.123 "rw_mbytes_per_sec": 0, 00:37:06.123 "r_mbytes_per_sec": 0, 00:37:06.123 "w_mbytes_per_sec": 0 00:37:06.123 }, 00:37:06.123 "claimed": false, 00:37:06.123 "zoned": false, 00:37:06.123 "supported_io_types": { 00:37:06.123 "read": true, 00:37:06.123 "write": true, 00:37:06.123 "unmap": true, 00:37:06.123 "flush": true, 00:37:06.123 "reset": true, 00:37:06.123 "nvme_admin": true, 00:37:06.123 "nvme_io": true, 00:37:06.123 "nvme_io_md": false, 00:37:06.123 "write_zeroes": true, 00:37:06.123 "zcopy": false, 00:37:06.123 "get_zone_info": false, 00:37:06.123 "zone_management": false, 00:37:06.123 "zone_append": false, 00:37:06.123 "compare": true, 00:37:06.123 "compare_and_write": true, 00:37:06.123 "abort": true, 00:37:06.123 "seek_hole": false, 00:37:06.123 "seek_data": false, 00:37:06.123 "copy": true, 00:37:06.123 "nvme_iov_md": false 00:37:06.123 }, 00:37:06.123 "memory_domains": [ 00:37:06.123 { 00:37:06.123 "dma_device_id": "system", 00:37:06.123 "dma_device_type": 1 00:37:06.123 } 00:37:06.123 ], 00:37:06.123 "driver_specific": { 00:37:06.123 "nvme": [ 00:37:06.123 { 00:37:06.123 "trid": { 00:37:06.123 "trtype": "TCP", 00:37:06.123 "adrfam": "IPv4", 00:37:06.123 "traddr": "10.0.0.2", 00:37:06.123 "trsvcid": "4420", 00:37:06.123 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:06.123 }, 00:37:06.123 "ctrlr_data": { 00:37:06.123 "cntlid": 1, 00:37:06.123 "vendor_id": "0x8086", 00:37:06.123 "model_number": "SPDK bdev Controller", 00:37:06.123 "serial_number": "SPDK0", 00:37:06.123 "firmware_revision": "25.01", 00:37:06.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:06.123 "oacs": { 00:37:06.123 "security": 0, 00:37:06.123 "format": 0, 00:37:06.123 "firmware": 0, 00:37:06.123 "ns_manage": 0 00:37:06.123 }, 00:37:06.123 "multi_ctrlr": true, 
00:37:06.123 "ana_reporting": false 00:37:06.123 }, 00:37:06.123 "vs": { 00:37:06.123 "nvme_version": "1.3" 00:37:06.123 }, 00:37:06.123 "ns_data": { 00:37:06.123 "id": 1, 00:37:06.123 "can_share": true 00:37:06.123 } 00:37:06.123 } 00:37:06.123 ], 00:37:06.123 "mp_policy": "active_passive" 00:37:06.123 } 00:37:06.123 } 00:37:06.123 ] 00:37:06.123 12:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1596109 00:37:06.123 12:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:06.123 12:20:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:06.123 Running I/O for 10 seconds... 00:37:07.508 Latency(us) 00:37:07.508 [2024-12-05T11:20:32.557Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:07.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:07.508 Nvme0n1 : 1.00 16901.00 66.02 0.00 0.00 0.00 0.00 0.00 00:37:07.508 [2024-12-05T11:20:32.557Z] =================================================================================================================== 00:37:07.508 [2024-12-05T11:20:32.557Z] Total : 16901.00 66.02 0.00 0.00 0.00 0.00 0.00 00:37:07.508 00:37:08.080 12:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f2b8b00c-2912-407e-847b-790f19109156 00:37:08.341 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:08.341 Nvme0n1 : 2.00 17150.00 66.99 0.00 0.00 0.00 0.00 0.00 00:37:08.341 [2024-12-05T11:20:33.390Z] 
=================================================================================================================== 00:37:08.341 [2024-12-05T11:20:33.390Z] Total : 17150.00 66.99 0.00 0.00 0.00 0.00 0.00 00:37:08.341 00:37:08.341 true 00:37:08.341 12:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2b8b00c-2912-407e-847b-790f19109156 00:37:08.341 12:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:08.649 12:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:08.649 12:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:08.649 12:20:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1596109 00:37:09.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:09.317 Nvme0n1 : 3.00 17344.67 67.75 0.00 0.00 0.00 0.00 0.00 00:37:09.317 [2024-12-05T11:20:34.366Z] =================================================================================================================== 00:37:09.317 [2024-12-05T11:20:34.366Z] Total : 17344.67 67.75 0.00 0.00 0.00 0.00 0.00 00:37:09.317 00:37:10.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:10.255 Nvme0n1 : 4.00 18104.25 70.72 0.00 0.00 0.00 0.00 0.00 00:37:10.255 [2024-12-05T11:20:35.304Z] =================================================================================================================== 00:37:10.255 [2024-12-05T11:20:35.304Z] Total : 18104.25 70.72 0.00 0.00 0.00 0.00 0.00 00:37:10.255 00:37:11.193 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:37:11.193 Nvme0n1 : 5.00 19601.60 76.57 0.00 0.00 0.00 0.00 0.00 00:37:11.193 [2024-12-05T11:20:36.242Z] =================================================================================================================== 00:37:11.193 [2024-12-05T11:20:36.242Z] Total : 19601.60 76.57 0.00 0.00 0.00 0.00 0.00 00:37:11.193 00:37:12.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:12.129 Nvme0n1 : 6.00 20607.83 80.50 0.00 0.00 0.00 0.00 0.00 00:37:12.129 [2024-12-05T11:20:37.178Z] =================================================================================================================== 00:37:12.129 [2024-12-05T11:20:37.178Z] Total : 20607.83 80.50 0.00 0.00 0.00 0.00 0.00 00:37:12.129 00:37:13.507 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:13.507 Nvme0n1 : 7.00 21310.57 83.24 0.00 0.00 0.00 0.00 0.00 00:37:13.507 [2024-12-05T11:20:38.556Z] =================================================================================================================== 00:37:13.507 [2024-12-05T11:20:38.556Z] Total : 21310.57 83.24 0.00 0.00 0.00 0.00 0.00 00:37:13.507 00:37:14.468 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:14.468 Nvme0n1 : 8.00 21853.50 85.37 0.00 0.00 0.00 0.00 0.00 00:37:14.468 [2024-12-05T11:20:39.517Z] =================================================================================================================== 00:37:14.468 [2024-12-05T11:20:39.517Z] Total : 21853.50 85.37 0.00 0.00 0.00 0.00 0.00 00:37:14.468 00:37:15.409 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:15.409 Nvme0n1 : 9.00 22275.78 87.01 0.00 0.00 0.00 0.00 0.00 00:37:15.409 [2024-12-05T11:20:40.458Z] =================================================================================================================== 00:37:15.409 [2024-12-05T11:20:40.458Z] Total : 22275.78 87.01 0.00 0.00 0.00 0.00 0.00 00:37:15.409 
00:37:16.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:16.350 Nvme0n1 : 10.00 22610.70 88.32 0.00 0.00 0.00 0.00 0.00 00:37:16.350 [2024-12-05T11:20:41.399Z] =================================================================================================================== 00:37:16.350 [2024-12-05T11:20:41.399Z] Total : 22610.70 88.32 0.00 0.00 0.00 0.00 0.00 00:37:16.350 00:37:16.350 00:37:16.350 Latency(us) 00:37:16.350 [2024-12-05T11:20:41.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:16.350 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:16.350 Nvme0n1 : 10.00 22608.14 88.31 0.00 0.00 5658.08 2908.16 29054.29 00:37:16.350 [2024-12-05T11:20:41.399Z] =================================================================================================================== 00:37:16.350 [2024-12-05T11:20:41.399Z] Total : 22608.14 88.31 0.00 0.00 5658.08 2908.16 29054.29 00:37:16.350 { 00:37:16.350 "results": [ 00:37:16.350 { 00:37:16.350 "job": "Nvme0n1", 00:37:16.350 "core_mask": "0x2", 00:37:16.350 "workload": "randwrite", 00:37:16.350 "status": "finished", 00:37:16.350 "queue_depth": 128, 00:37:16.350 "io_size": 4096, 00:37:16.350 "runtime": 10.002458, 00:37:16.350 "iops": 22608.14291847064, 00:37:16.350 "mibps": 88.31305827527594, 00:37:16.350 "io_failed": 0, 00:37:16.350 "io_timeout": 0, 00:37:16.350 "avg_latency_us": 5658.08057333976, 00:37:16.350 "min_latency_us": 2908.16, 00:37:16.350 "max_latency_us": 29054.293333333335 00:37:16.350 } 00:37:16.350 ], 00:37:16.350 "core_count": 1 00:37:16.350 } 00:37:16.350 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1595812 00:37:16.350 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1595812 ']' 00:37:16.350 12:20:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1595812 00:37:16.350 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:37:16.350 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:16.350 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1595812 00:37:16.350 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:16.350 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:16.350 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1595812' 00:37:16.350 killing process with pid 1595812 00:37:16.350 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1595812 00:37:16.350 Received shutdown signal, test time was about 10.000000 seconds 00:37:16.350 00:37:16.350 Latency(us) 00:37:16.350 [2024-12-05T11:20:41.399Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:16.350 [2024-12-05T11:20:41.399Z] =================================================================================================================== 00:37:16.350 [2024-12-05T11:20:41.399Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:16.350 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1595812 00:37:16.350 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:16.611 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:16.871 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2b8b00c-2912-407e-847b-790f19109156 00:37:16.871 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:16.871 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:16.871 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:37:16.871 12:20:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:17.131 [2024-12-05 12:20:42.046632] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:17.131 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2b8b00c-2912-407e-847b-790f19109156 00:37:17.131 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:37:17.131 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2b8b00c-2912-407e-847b-790f19109156 00:37:17.132 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:17.132 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:17.132 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:17.132 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:17.132 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:17.132 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:17.132 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:17.132 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:17.132 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2b8b00c-2912-407e-847b-790f19109156 00:37:17.392 request: 00:37:17.392 { 00:37:17.392 "uuid": "f2b8b00c-2912-407e-847b-790f19109156", 00:37:17.392 "method": 
"bdev_lvol_get_lvstores", 00:37:17.392 "req_id": 1 00:37:17.392 } 00:37:17.392 Got JSON-RPC error response 00:37:17.392 response: 00:37:17.392 { 00:37:17.392 "code": -19, 00:37:17.392 "message": "No such device" 00:37:17.392 } 00:37:17.392 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:37:17.392 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:17.392 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:17.392 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:17.392 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:17.652 aio_bdev 00:37:17.652 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 14184dc1-5fad-4293-bd47-39a95934cb23 00:37:17.652 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=14184dc1-5fad-4293-bd47-39a95934cb23 00:37:17.652 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:17.652 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:37:17.652 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:17.652 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:17.652 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:17.652 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 14184dc1-5fad-4293-bd47-39a95934cb23 -t 2000 00:37:17.913 [ 00:37:17.913 { 00:37:17.913 "name": "14184dc1-5fad-4293-bd47-39a95934cb23", 00:37:17.913 "aliases": [ 00:37:17.913 "lvs/lvol" 00:37:17.913 ], 00:37:17.913 "product_name": "Logical Volume", 00:37:17.913 "block_size": 4096, 00:37:17.913 "num_blocks": 38912, 00:37:17.913 "uuid": "14184dc1-5fad-4293-bd47-39a95934cb23", 00:37:17.913 "assigned_rate_limits": { 00:37:17.913 "rw_ios_per_sec": 0, 00:37:17.913 "rw_mbytes_per_sec": 0, 00:37:17.913 "r_mbytes_per_sec": 0, 00:37:17.913 "w_mbytes_per_sec": 0 00:37:17.913 }, 00:37:17.913 "claimed": false, 00:37:17.913 "zoned": false, 00:37:17.913 "supported_io_types": { 00:37:17.913 "read": true, 00:37:17.913 "write": true, 00:37:17.913 "unmap": true, 00:37:17.913 "flush": false, 00:37:17.913 "reset": true, 00:37:17.913 "nvme_admin": false, 00:37:17.913 "nvme_io": false, 00:37:17.913 "nvme_io_md": false, 00:37:17.913 "write_zeroes": true, 00:37:17.913 "zcopy": false, 00:37:17.913 "get_zone_info": false, 00:37:17.913 "zone_management": false, 00:37:17.913 "zone_append": false, 00:37:17.913 "compare": false, 00:37:17.913 "compare_and_write": false, 00:37:17.913 "abort": false, 00:37:17.913 "seek_hole": true, 00:37:17.913 "seek_data": true, 00:37:17.913 "copy": false, 00:37:17.913 "nvme_iov_md": false 00:37:17.913 }, 00:37:17.913 "driver_specific": { 00:37:17.913 "lvol": { 00:37:17.913 "lvol_store_uuid": "f2b8b00c-2912-407e-847b-790f19109156", 00:37:17.913 "base_bdev": "aio_bdev", 00:37:17.913 
"thin_provision": false, 00:37:17.913 "num_allocated_clusters": 38, 00:37:17.913 "snapshot": false, 00:37:17.913 "clone": false, 00:37:17.913 "esnap_clone": false 00:37:17.913 } 00:37:17.913 } 00:37:17.913 } 00:37:17.913 ] 00:37:17.913 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:37:17.913 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2b8b00c-2912-407e-847b-790f19109156 00:37:17.913 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:18.174 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:18.174 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f2b8b00c-2912-407e-847b-790f19109156 00:37:18.174 12:20:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:18.174 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:18.174 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 14184dc1-5fad-4293-bd47-39a95934cb23 00:37:18.435 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f2b8b00c-2912-407e-847b-790f19109156 
00:37:18.694 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:18.694 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:18.955 00:37:18.955 real 0m15.960s 00:37:18.955 user 0m15.617s 00:37:18.955 sys 0m1.474s 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:37:18.955 ************************************ 00:37:18.955 END TEST lvs_grow_clean 00:37:18.955 ************************************ 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:18.955 ************************************ 00:37:18.955 START TEST lvs_grow_dirty 00:37:18.955 ************************************ 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:37:18.955 12:20:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:18.955 12:20:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:19.216 12:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:37:19.216 12:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:37:19.216 12:20:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3b8da7ee-a28d-4128-9edd-cecce6c6fe1b 00:37:19.216 12:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b8da7ee-a28d-4128-9edd-cecce6c6fe1b 00:37:19.216 12:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:37:19.476 12:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:37:19.476 12:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:37:19.476 12:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3b8da7ee-a28d-4128-9edd-cecce6c6fe1b lvol 150 00:37:19.737 12:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3db1a9bc-ecca-45da-8d1f-126f94f9ea25 00:37:19.737 12:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:19.737 12:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:37:19.737 [2024-12-05 12:20:44.746577] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:37:19.737 [2024-12-05 
12:20:44.746731] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:37:19.737 true 00:37:19.737 12:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b8da7ee-a28d-4128-9edd-cecce6c6fe1b 00:37:19.737 12:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:37:19.997 12:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:37:19.997 12:20:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:20.257 12:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3db1a9bc-ecca-45da-8d1f-126f94f9ea25 00:37:20.517 12:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:20.517 [2024-12-05 12:20:45.479177] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:20.517 12:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:20.776 12:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1598893 00:37:20.776 12:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:20.776 12:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:37:20.776 12:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1598893 /var/tmp/bdevperf.sock 00:37:20.776 12:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1598893 ']' 00:37:20.776 12:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:37:20.776 12:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:20.776 12:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:20.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:37:20.776 12:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:20.776 12:20:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:20.776 [2024-12-05 12:20:45.711357] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:37:20.776 [2024-12-05 12:20:45.711413] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1598893 ] 00:37:20.776 [2024-12-05 12:20:45.796658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:21.035 [2024-12-05 12:20:45.826912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:21.604 12:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:21.604 12:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:37:21.604 12:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:37:21.863 Nvme0n1 00:37:21.863 12:20:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:37:22.123 [ 00:37:22.123 { 00:37:22.123 "name": "Nvme0n1", 00:37:22.123 "aliases": [ 00:37:22.123 "3db1a9bc-ecca-45da-8d1f-126f94f9ea25" 00:37:22.123 ], 00:37:22.123 "product_name": "NVMe disk", 00:37:22.123 "block_size": 4096, 00:37:22.123 "num_blocks": 38912, 00:37:22.123 "uuid": "3db1a9bc-ecca-45da-8d1f-126f94f9ea25", 00:37:22.123 "numa_id": 0, 00:37:22.123 "assigned_rate_limits": { 00:37:22.123 "rw_ios_per_sec": 0, 00:37:22.123 "rw_mbytes_per_sec": 0, 00:37:22.123 "r_mbytes_per_sec": 0, 00:37:22.123 "w_mbytes_per_sec": 0 00:37:22.123 }, 00:37:22.123 "claimed": false, 00:37:22.123 "zoned": false, 
00:37:22.123 "supported_io_types": { 00:37:22.123 "read": true, 00:37:22.123 "write": true, 00:37:22.123 "unmap": true, 00:37:22.123 "flush": true, 00:37:22.123 "reset": true, 00:37:22.123 "nvme_admin": true, 00:37:22.123 "nvme_io": true, 00:37:22.123 "nvme_io_md": false, 00:37:22.123 "write_zeroes": true, 00:37:22.123 "zcopy": false, 00:37:22.123 "get_zone_info": false, 00:37:22.123 "zone_management": false, 00:37:22.123 "zone_append": false, 00:37:22.123 "compare": true, 00:37:22.123 "compare_and_write": true, 00:37:22.123 "abort": true, 00:37:22.123 "seek_hole": false, 00:37:22.123 "seek_data": false, 00:37:22.123 "copy": true, 00:37:22.123 "nvme_iov_md": false 00:37:22.123 }, 00:37:22.123 "memory_domains": [ 00:37:22.123 { 00:37:22.123 "dma_device_id": "system", 00:37:22.123 "dma_device_type": 1 00:37:22.123 } 00:37:22.123 ], 00:37:22.123 "driver_specific": { 00:37:22.123 "nvme": [ 00:37:22.123 { 00:37:22.123 "trid": { 00:37:22.123 "trtype": "TCP", 00:37:22.123 "adrfam": "IPv4", 00:37:22.123 "traddr": "10.0.0.2", 00:37:22.123 "trsvcid": "4420", 00:37:22.123 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:37:22.123 }, 00:37:22.123 "ctrlr_data": { 00:37:22.123 "cntlid": 1, 00:37:22.123 "vendor_id": "0x8086", 00:37:22.123 "model_number": "SPDK bdev Controller", 00:37:22.123 "serial_number": "SPDK0", 00:37:22.123 "firmware_revision": "25.01", 00:37:22.123 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:22.123 "oacs": { 00:37:22.123 "security": 0, 00:37:22.123 "format": 0, 00:37:22.123 "firmware": 0, 00:37:22.123 "ns_manage": 0 00:37:22.123 }, 00:37:22.123 "multi_ctrlr": true, 00:37:22.123 "ana_reporting": false 00:37:22.123 }, 00:37:22.123 "vs": { 00:37:22.123 "nvme_version": "1.3" 00:37:22.123 }, 00:37:22.123 "ns_data": { 00:37:22.123 "id": 1, 00:37:22.123 "can_share": true 00:37:22.123 } 00:37:22.123 } 00:37:22.123 ], 00:37:22.123 "mp_policy": "active_passive" 00:37:22.123 } 00:37:22.123 } 00:37:22.123 ] 00:37:22.123 12:20:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1599126 00:37:22.123 12:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:37:22.123 12:20:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:37:22.123 Running I/O for 10 seconds... 00:37:23.506 Latency(us) 00:37:23.506 [2024-12-05T11:20:48.555Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:23.506 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:23.506 Nvme0n1 : 1.00 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:37:23.506 [2024-12-05T11:20:48.555Z] =================================================================================================================== 00:37:23.506 [2024-12-05T11:20:48.555Z] Total : 17526.00 68.46 0.00 0.00 0.00 0.00 0.00 00:37:23.506 00:37:24.078 12:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3b8da7ee-a28d-4128-9edd-cecce6c6fe1b 00:37:24.078 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:24.078 Nvme0n1 : 2.00 17797.00 69.52 0.00 0.00 0.00 0.00 0.00 00:37:24.078 [2024-12-05T11:20:49.127Z] =================================================================================================================== 00:37:24.078 [2024-12-05T11:20:49.127Z] Total : 17797.00 69.52 0.00 0.00 0.00 0.00 0.00 00:37:24.078 00:37:24.339 true 00:37:24.339 12:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 3b8da7ee-a28d-4128-9edd-cecce6c6fe1b 00:37:24.339 12:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:37:24.339 12:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:37:24.339 12:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:37:24.339 12:20:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1599126 00:37:25.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:25.280 Nvme0n1 : 3.00 17876.00 69.83 0.00 0.00 0.00 0.00 0.00 00:37:25.280 [2024-12-05T11:20:50.329Z] =================================================================================================================== 00:37:25.280 [2024-12-05T11:20:50.329Z] Total : 17876.00 69.83 0.00 0.00 0.00 0.00 0.00 00:37:25.280 00:37:26.222 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:26.222 Nvme0n1 : 4.00 17931.50 70.04 0.00 0.00 0.00 0.00 0.00 00:37:26.222 [2024-12-05T11:20:51.271Z] =================================================================================================================== 00:37:26.222 [2024-12-05T11:20:51.271Z] Total : 17931.50 70.04 0.00 0.00 0.00 0.00 0.00 00:37:26.222 00:37:27.164 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:27.164 Nvme0n1 : 5.00 18117.00 70.77 0.00 0.00 0.00 0.00 0.00 00:37:27.164 [2024-12-05T11:20:52.213Z] =================================================================================================================== 00:37:27.164 [2024-12-05T11:20:52.213Z] Total : 18117.00 70.77 0.00 0.00 0.00 0.00 0.00 00:37:27.164 00:37:28.120 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:37:28.120 Nvme0n1 : 6.00 19352.00 75.59 0.00 0.00 0.00 0.00 0.00 00:37:28.120 [2024-12-05T11:20:53.169Z] =================================================================================================================== 00:37:28.120 [2024-12-05T11:20:53.169Z] Total : 19352.00 75.59 0.00 0.00 0.00 0.00 0.00 00:37:28.120 00:37:29.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:29.503 Nvme0n1 : 7.00 20252.29 79.11 0.00 0.00 0.00 0.00 0.00 00:37:29.503 [2024-12-05T11:20:54.552Z] =================================================================================================================== 00:37:29.503 [2024-12-05T11:20:54.552Z] Total : 20252.29 79.11 0.00 0.00 0.00 0.00 0.00 00:37:29.503 00:37:30.446 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:30.446 Nvme0n1 : 8.00 20919.62 81.72 0.00 0.00 0.00 0.00 0.00 00:37:30.446 [2024-12-05T11:20:55.495Z] =================================================================================================================== 00:37:30.446 [2024-12-05T11:20:55.495Z] Total : 20919.62 81.72 0.00 0.00 0.00 0.00 0.00 00:37:30.446 00:37:31.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:31.383 Nvme0n1 : 9.00 21445.67 83.77 0.00 0.00 0.00 0.00 0.00 00:37:31.383 [2024-12-05T11:20:56.432Z] =================================================================================================================== 00:37:31.383 [2024-12-05T11:20:56.432Z] Total : 21445.67 83.77 0.00 0.00 0.00 0.00 0.00 00:37:31.383 00:37:32.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:32.322 Nvme0n1 : 10.00 21866.50 85.42 0.00 0.00 0.00 0.00 0.00 00:37:32.322 [2024-12-05T11:20:57.371Z] =================================================================================================================== 00:37:32.322 [2024-12-05T11:20:57.371Z] Total : 21866.50 85.42 0.00 0.00 0.00 0.00 0.00 00:37:32.322 00:37:32.322 
00:37:32.322 Latency(us) 00:37:32.322 [2024-12-05T11:20:57.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:37:32.322 Nvme0n1 : 10.00 21865.12 85.41 0.00 0.00 5850.86 3058.35 31020.37 00:37:32.322 [2024-12-05T11:20:57.371Z] =================================================================================================================== 00:37:32.322 [2024-12-05T11:20:57.371Z] Total : 21865.12 85.41 0.00 0.00 5850.86 3058.35 31020.37 00:37:32.323 { 00:37:32.323 "results": [ 00:37:32.323 { 00:37:32.323 "job": "Nvme0n1", 00:37:32.323 "core_mask": "0x2", 00:37:32.323 "workload": "randwrite", 00:37:32.323 "status": "finished", 00:37:32.323 "queue_depth": 128, 00:37:32.323 "io_size": 4096, 00:37:32.323 "runtime": 10.003604, 00:37:32.323 "iops": 21865.11981082018, 00:37:32.323 "mibps": 85.41062426101632, 00:37:32.323 "io_failed": 0, 00:37:32.323 "io_timeout": 0, 00:37:32.323 "avg_latency_us": 5850.861316630853, 00:37:32.323 "min_latency_us": 3058.346666666667, 00:37:32.323 "max_latency_us": 31020.373333333333 00:37:32.323 } 00:37:32.323 ], 00:37:32.323 "core_count": 1 00:37:32.323 } 00:37:32.323 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1598893 00:37:32.323 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1598893 ']' 00:37:32.323 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1598893 00:37:32.323 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:37:32.323 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:32.323 12:20:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1598893 00:37:32.323 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:32.323 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:32.323 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1598893' 00:37:32.323 killing process with pid 1598893 00:37:32.323 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1598893 00:37:32.323 Received shutdown signal, test time was about 10.000000 seconds 00:37:32.323 00:37:32.323 Latency(us) 00:37:32.323 [2024-12-05T11:20:57.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:32.323 [2024-12-05T11:20:57.372Z] =================================================================================================================== 00:37:32.323 [2024-12-05T11:20:57.372Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:32.323 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1598893 00:37:32.323 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:32.583 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:32.843 12:20:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b8da7ee-a28d-4128-9edd-cecce6c6fe1b 00:37:32.843 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:37:32.843 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:37:32.843 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:37:32.843 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1595106 00:37:32.843 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1595106 00:37:33.103 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1595106 Killed "${NVMF_APP[@]}" "$@" 00:37:33.103 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:37:33.103 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:37:33.103 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:37:33.103 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:33.103 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:33.103 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=1601237 00:37:33.103 12:20:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 1601237 00:37:33.103 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1601237 ']' 00:37:33.103 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:37:33.103 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:33.103 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:33.103 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:33.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:33.103 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:33.103 12:20:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:33.103 [2024-12-05 12:20:57.967874] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:33.103 [2024-12-05 12:20:57.968874] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:37:33.103 [2024-12-05 12:20:57.968920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:33.103 [2024-12-05 12:20:58.061001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:33.103 [2024-12-05 12:20:58.092603] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:33.103 [2024-12-05 12:20:58.092629] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:33.103 [2024-12-05 12:20:58.092635] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:33.103 [2024-12-05 12:20:58.092640] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:33.103 [2024-12-05 12:20:58.092644] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:33.103 [2024-12-05 12:20:58.093089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:33.103 [2024-12-05 12:20:58.145055] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:33.103 [2024-12-05 12:20:58.145231] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:34.044 [2024-12-05 12:20:58.967431] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:37:34.044 [2024-12-05 12:20:58.967696] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:37:34.044 [2024-12-05 12:20:58.967789] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3db1a9bc-ecca-45da-8d1f-126f94f9ea25 00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=3db1a9bc-ecca-45da-8d1f-126f94f9ea25 00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:34.044 12:20:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:34.305 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3db1a9bc-ecca-45da-8d1f-126f94f9ea25 -t 2000 00:37:34.305 [ 00:37:34.305 { 00:37:34.305 "name": "3db1a9bc-ecca-45da-8d1f-126f94f9ea25", 00:37:34.305 "aliases": [ 00:37:34.305 "lvs/lvol" 00:37:34.305 ], 00:37:34.305 "product_name": "Logical Volume", 00:37:34.305 "block_size": 4096, 00:37:34.305 "num_blocks": 38912, 00:37:34.305 "uuid": "3db1a9bc-ecca-45da-8d1f-126f94f9ea25", 00:37:34.305 "assigned_rate_limits": { 00:37:34.305 "rw_ios_per_sec": 0, 00:37:34.305 "rw_mbytes_per_sec": 0, 00:37:34.305 "r_mbytes_per_sec": 0, 00:37:34.305 "w_mbytes_per_sec": 0 00:37:34.305 }, 00:37:34.305 "claimed": false, 00:37:34.305 "zoned": false, 00:37:34.305 "supported_io_types": { 00:37:34.305 "read": true, 00:37:34.305 "write": true, 00:37:34.305 "unmap": true, 00:37:34.305 "flush": false, 00:37:34.305 "reset": true, 00:37:34.305 "nvme_admin": false, 00:37:34.305 "nvme_io": false, 00:37:34.305 "nvme_io_md": false, 00:37:34.305 "write_zeroes": true, 
00:37:34.305 "zcopy": false, 00:37:34.305 "get_zone_info": false, 00:37:34.305 "zone_management": false, 00:37:34.305 "zone_append": false, 00:37:34.305 "compare": false, 00:37:34.305 "compare_and_write": false, 00:37:34.305 "abort": false, 00:37:34.305 "seek_hole": true, 00:37:34.305 "seek_data": true, 00:37:34.305 "copy": false, 00:37:34.305 "nvme_iov_md": false 00:37:34.305 }, 00:37:34.305 "driver_specific": { 00:37:34.305 "lvol": { 00:37:34.305 "lvol_store_uuid": "3b8da7ee-a28d-4128-9edd-cecce6c6fe1b", 00:37:34.305 "base_bdev": "aio_bdev", 00:37:34.305 "thin_provision": false, 00:37:34.305 "num_allocated_clusters": 38, 00:37:34.305 "snapshot": false, 00:37:34.305 "clone": false, 00:37:34.305 "esnap_clone": false 00:37:34.305 } 00:37:34.305 } 00:37:34.305 } 00:37:34.305 ] 00:37:34.305 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:37:34.305 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b8da7ee-a28d-4128-9edd-cecce6c6fe1b 00:37:34.305 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:37:34.566 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:37:34.566 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b8da7ee-a28d-4128-9edd-cecce6c6fe1b 00:37:34.566 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:37:34.827 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:37:34.827 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:35.088 [2024-12-05 12:20:59.881637] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:37:35.088 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b8da7ee-a28d-4128-9edd-cecce6c6fe1b 00:37:35.088 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:37:35.088 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b8da7ee-a28d-4128-9edd-cecce6c6fe1b 00:37:35.088 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:35.088 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:35.088 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:35.088 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:35.088 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:35.088 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:35.088 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:35.088 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:37:35.088 12:20:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b8da7ee-a28d-4128-9edd-cecce6c6fe1b 00:37:35.088 request: 00:37:35.088 { 00:37:35.088 "uuid": "3b8da7ee-a28d-4128-9edd-cecce6c6fe1b", 00:37:35.088 "method": "bdev_lvol_get_lvstores", 00:37:35.088 "req_id": 1 00:37:35.088 } 00:37:35.088 Got JSON-RPC error response 00:37:35.088 response: 00:37:35.088 { 00:37:35.088 "code": -19, 00:37:35.088 "message": "No such device" 00:37:35.088 } 00:37:35.088 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:37:35.088 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:35.088 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:35.088 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:35.088 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:37:35.349 aio_bdev 00:37:35.349 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3db1a9bc-ecca-45da-8d1f-126f94f9ea25 00:37:35.349 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=3db1a9bc-ecca-45da-8d1f-126f94f9ea25 00:37:35.349 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:37:35.349 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:37:35.349 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:37:35.349 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:37:35.349 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:37:35.609 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3db1a9bc-ecca-45da-8d1f-126f94f9ea25 -t 2000 00:37:35.609 [ 00:37:35.609 { 00:37:35.609 "name": "3db1a9bc-ecca-45da-8d1f-126f94f9ea25", 00:37:35.609 "aliases": [ 00:37:35.609 "lvs/lvol" 00:37:35.609 ], 00:37:35.609 "product_name": "Logical Volume", 00:37:35.609 "block_size": 4096, 00:37:35.609 "num_blocks": 38912, 00:37:35.609 "uuid": "3db1a9bc-ecca-45da-8d1f-126f94f9ea25", 00:37:35.609 "assigned_rate_limits": { 00:37:35.609 "rw_ios_per_sec": 0, 00:37:35.609 "rw_mbytes_per_sec": 0, 00:37:35.609 
"r_mbytes_per_sec": 0, 00:37:35.609 "w_mbytes_per_sec": 0 00:37:35.609 }, 00:37:35.609 "claimed": false, 00:37:35.609 "zoned": false, 00:37:35.609 "supported_io_types": { 00:37:35.609 "read": true, 00:37:35.609 "write": true, 00:37:35.609 "unmap": true, 00:37:35.609 "flush": false, 00:37:35.609 "reset": true, 00:37:35.609 "nvme_admin": false, 00:37:35.609 "nvme_io": false, 00:37:35.609 "nvme_io_md": false, 00:37:35.609 "write_zeroes": true, 00:37:35.609 "zcopy": false, 00:37:35.609 "get_zone_info": false, 00:37:35.609 "zone_management": false, 00:37:35.609 "zone_append": false, 00:37:35.609 "compare": false, 00:37:35.609 "compare_and_write": false, 00:37:35.609 "abort": false, 00:37:35.609 "seek_hole": true, 00:37:35.609 "seek_data": true, 00:37:35.609 "copy": false, 00:37:35.609 "nvme_iov_md": false 00:37:35.609 }, 00:37:35.609 "driver_specific": { 00:37:35.609 "lvol": { 00:37:35.609 "lvol_store_uuid": "3b8da7ee-a28d-4128-9edd-cecce6c6fe1b", 00:37:35.609 "base_bdev": "aio_bdev", 00:37:35.609 "thin_provision": false, 00:37:35.609 "num_allocated_clusters": 38, 00:37:35.609 "snapshot": false, 00:37:35.609 "clone": false, 00:37:35.609 "esnap_clone": false 00:37:35.609 } 00:37:35.609 } 00:37:35.609 } 00:37:35.609 ] 00:37:35.609 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:37:35.609 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b8da7ee-a28d-4128-9edd-cecce6c6fe1b 00:37:35.609 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:37:35.869 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:37:35.869 12:21:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3b8da7ee-a28d-4128-9edd-cecce6c6fe1b 00:37:35.869 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:37:36.129 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:37:36.129 12:21:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3db1a9bc-ecca-45da-8d1f-126f94f9ea25 00:37:36.129 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3b8da7ee-a28d-4128-9edd-cecce6c6fe1b 00:37:36.389 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:37:36.650 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:37:36.650 00:37:36.650 real 0m17.730s 00:37:36.650 user 0m35.688s 00:37:36.650 sys 0m3.023s 00:37:36.650 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:36.650 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:37:36.650 ************************************ 00:37:36.650 END TEST lvs_grow_dirty 00:37:36.650 ************************************ 
00:37:36.650 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:37:36.650 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:37:36.650 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:37:36.650 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:37:36.650 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:37:36.650 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:37:36.650 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:37:36.650 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:37:36.650 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:37:36.650 nvmf_trace.0 00:37:36.650 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:37:36.651 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:37:36.651 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:37:36.651 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:37:36.651 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:37:36.651 12:21:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:37:36.651 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:37:36.651 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:37:36.651 rmmod nvme_tcp 00:37:36.912 rmmod nvme_fabrics 00:37:36.912 rmmod nvme_keyring 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 1601237 ']' 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 1601237 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1601237 ']' 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1601237 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1601237 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:36.912 
12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1601237' 00:37:36.912 killing process with pid 1601237 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1601237 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1601237 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@254 -- # local dev 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # remove_target_ns 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:37:36.912 12:21:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # delete_main_bridge 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # return 0 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 
00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 
00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@274 -- # iptr 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-save 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-restore 00:37:39.458 00:37:39.458 real 0m45.239s 00:37:39.458 user 0m54.420s 00:37:39.458 sys 0m10.681s 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:37:39.458 ************************************ 00:37:39.458 END TEST nvmf_lvs_grow 00:37:39.458 ************************************ 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:39.458 ************************************ 00:37:39.458 START TEST nvmf_bdev_io_wait 00:37:39.458 ************************************ 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh 
--transport=tcp --interrupt-mode 00:37:39.458 * Looking for test storage... 00:37:39.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:37:39.458 12:21:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 
00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:39.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.458 --rc genhtml_branch_coverage=1 00:37:39.458 --rc genhtml_function_coverage=1 00:37:39.458 --rc genhtml_legend=1 00:37:39.458 --rc geninfo_all_blocks=1 00:37:39.458 --rc geninfo_unexecuted_blocks=1 00:37:39.458 00:37:39.458 ' 00:37:39.458 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:39.458 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.458 --rc genhtml_branch_coverage=1 00:37:39.458 --rc genhtml_function_coverage=1 00:37:39.458 --rc genhtml_legend=1 00:37:39.458 --rc geninfo_all_blocks=1 00:37:39.459 --rc geninfo_unexecuted_blocks=1 00:37:39.459 00:37:39.459 ' 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:39.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.459 --rc genhtml_branch_coverage=1 00:37:39.459 --rc genhtml_function_coverage=1 00:37:39.459 --rc genhtml_legend=1 00:37:39.459 --rc geninfo_all_blocks=1 00:37:39.459 --rc geninfo_unexecuted_blocks=1 00:37:39.459 00:37:39.459 ' 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:39.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.459 
--rc genhtml_branch_coverage=1 00:37:39.459 --rc genhtml_function_coverage=1 00:37:39.459 --rc genhtml_legend=1 00:37:39.459 --rc geninfo_all_blocks=1 00:37:39.459 --rc geninfo_unexecuted_blocks=1 00:37:39.459 00:37:39.459 ' 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.459 12:21:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:39.459 12:21:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:37:39.459 12:21:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:37:39.459 12:21:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 
-- # local -ga e810 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:47.600 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:37:47.601 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:47.601 
12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:37:47.601 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:37:47.601 Found net devices under 0000:4b:00.0: cvl_0_0 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:37:47.601 Found net devices under 0000:4b:00.1: cvl_0_1 
00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@247 -- # create_target_ns 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 
00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:37:47.601 12:21:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:37:47.601 12:21:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:37:47.601 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:37:47.602 10.0.0.1 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:47.602 12:21:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:37:47.602 10.0.0.2 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:37:47.602 
12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:37:47.602 12:21:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # 
dev=cvl_0_0 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:37:47.602 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:47.602 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.526 ms 00:37:47.602 00:37:47.602 --- 10.0.0.1 ping statistics --- 00:37:47.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:47.602 rtt min/avg/max/mdev = 0.526/0.526/0.526/0.000 ms 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:37:47.602 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:47.602 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:37:47.602 00:37:47.602 --- 10.0.0.2 ping statistics --- 00:37:47.602 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:47.602 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:37:47.602 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:47.603 12:21:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:47.603 12:21:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev= 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:47.603 12:21:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:47.603 12:21:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local 
dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev= 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:37:47.603 ' 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:37:47.603 12:21:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=1606702 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 1606702 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1606702 ']' 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:47.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:47.603 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:47.603 [2024-12-05 12:21:12.135486] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:47.603 [2024-12-05 12:21:12.136610] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:37:47.603 [2024-12-05 12:21:12.136660] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:47.603 [2024-12-05 12:21:12.241892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:47.603 [2024-12-05 12:21:12.296974] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:47.603 [2024-12-05 12:21:12.297027] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:47.603 [2024-12-05 12:21:12.297035] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:47.603 [2024-12-05 12:21:12.297043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:47.603 [2024-12-05 12:21:12.297049] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:47.603 [2024-12-05 12:21:12.299127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:47.603 [2024-12-05 12:21:12.299286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:47.603 [2024-12-05 12:21:12.299462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:47.603 [2024-12-05 12:21:12.299467] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:47.603 [2024-12-05 12:21:12.300071] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:48.176 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:48.176 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:37:48.176 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:37:48.176 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:48.176 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:48.176 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:48.176 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:37:48.176 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.176 12:21:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.176 12:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:48.176 [2024-12-05 12:21:13.064340] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:48.176 [2024-12-05 12:21:13.065078] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:48.176 [2024-12-05 12:21:13.065207] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:37:48.176 [2024-12-05 12:21:13.065352] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:48.176 [2024-12-05 12:21:13.076581] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:48.176 Malloc0 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.176 12:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:48.176 [2024-12-05 12:21:13.148730] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1606913 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1606915 00:37:48.176 12:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:48.176 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:48.176 { 00:37:48.176 "params": { 00:37:48.176 "name": "Nvme$subsystem", 00:37:48.176 "trtype": "$TEST_TRANSPORT", 00:37:48.176 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:48.176 "adrfam": "ipv4", 00:37:48.176 "trsvcid": "$NVMF_PORT", 00:37:48.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:48.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:48.177 "hdgst": ${hdgst:-false}, 00:37:48.177 "ddgst": ${ddgst:-false} 00:37:48.177 }, 00:37:48.177 "method": "bdev_nvme_attach_controller" 00:37:48.177 } 00:37:48.177 EOF 00:37:48.177 )") 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1606917 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:48.177 12:21:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:48.177 { 00:37:48.177 "params": { 00:37:48.177 "name": "Nvme$subsystem", 00:37:48.177 "trtype": "$TEST_TRANSPORT", 00:37:48.177 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:48.177 "adrfam": "ipv4", 00:37:48.177 "trsvcid": "$NVMF_PORT", 00:37:48.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:48.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:48.177 "hdgst": ${hdgst:-false}, 00:37:48.177 "ddgst": ${ddgst:-false} 00:37:48.177 }, 00:37:48.177 "method": "bdev_nvme_attach_controller" 00:37:48.177 } 00:37:48.177 EOF 00:37:48.177 )") 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1606921 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:48.177 { 00:37:48.177 "params": { 00:37:48.177 "name": 
"Nvme$subsystem", 00:37:48.177 "trtype": "$TEST_TRANSPORT", 00:37:48.177 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:48.177 "adrfam": "ipv4", 00:37:48.177 "trsvcid": "$NVMF_PORT", 00:37:48.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:48.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:48.177 "hdgst": ${hdgst:-false}, 00:37:48.177 "ddgst": ${ddgst:-false} 00:37:48.177 }, 00:37:48.177 "method": "bdev_nvme_attach_controller" 00:37:48.177 } 00:37:48.177 EOF 00:37:48.177 )") 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:48.177 { 00:37:48.177 "params": { 00:37:48.177 "name": "Nvme$subsystem", 00:37:48.177 "trtype": "$TEST_TRANSPORT", 00:37:48.177 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:48.177 "adrfam": "ipv4", 00:37:48.177 "trsvcid": "$NVMF_PORT", 00:37:48.177 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:48.177 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:48.177 "hdgst": ${hdgst:-false}, 00:37:48.177 "ddgst": ${ddgst:-false} 00:37:48.177 }, 00:37:48.177 "method": 
"bdev_nvme_attach_controller" 00:37:48.177 } 00:37:48.177 EOF 00:37:48.177 )") 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1606913 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:37:48.177 "params": { 00:37:48.177 "name": "Nvme1", 00:37:48.177 "trtype": "tcp", 00:37:48.177 "traddr": "10.0.0.2", 00:37:48.177 "adrfam": "ipv4", 00:37:48.177 "trsvcid": "4420", 00:37:48.177 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:48.177 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:48.177 "hdgst": false, 00:37:48.177 "ddgst": false 00:37:48.177 }, 00:37:48.177 "method": "bdev_nvme_attach_controller" 00:37:48.177 }' 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:37:48.177 "params": { 00:37:48.177 "name": "Nvme1", 00:37:48.177 "trtype": "tcp", 00:37:48.177 "traddr": "10.0.0.2", 00:37:48.177 "adrfam": "ipv4", 00:37:48.177 "trsvcid": "4420", 00:37:48.177 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:48.177 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:48.177 "hdgst": false, 00:37:48.177 "ddgst": false 00:37:48.177 }, 00:37:48.177 "method": "bdev_nvme_attach_controller" 00:37:48.177 }' 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:37:48.177 "params": { 00:37:48.177 "name": "Nvme1", 00:37:48.177 "trtype": "tcp", 00:37:48.177 "traddr": "10.0.0.2", 00:37:48.177 "adrfam": "ipv4", 00:37:48.177 "trsvcid": "4420", 00:37:48.177 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:48.177 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:48.177 "hdgst": false, 00:37:48.177 "ddgst": false 00:37:48.177 }, 00:37:48.177 "method": "bdev_nvme_attach_controller" 00:37:48.177 }' 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:37:48.177 12:21:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:37:48.177 "params": { 00:37:48.177 "name": "Nvme1", 00:37:48.177 "trtype": "tcp", 00:37:48.177 "traddr": "10.0.0.2", 00:37:48.177 "adrfam": "ipv4", 00:37:48.177 "trsvcid": "4420", 00:37:48.177 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:48.177 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:48.177 "hdgst": false, 00:37:48.177 "ddgst": false 00:37:48.177 }, 00:37:48.177 "method": "bdev_nvme_attach_controller" 
00:37:48.177 }' 00:37:48.177 [2024-12-05 12:21:13.207659] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:37:48.177 [2024-12-05 12:21:13.207736] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:37:48.177 [2024-12-05 12:21:13.210292] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:37:48.177 [2024-12-05 12:21:13.210360] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:37:48.177 [2024-12-05 12:21:13.210491] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:37:48.177 [2024-12-05 12:21:13.210553] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:37:48.177 [2024-12-05 12:21:13.217498] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:37:48.177 [2024-12-05 12:21:13.217591] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:37:48.438 [2024-12-05 12:21:13.432614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.438 [2024-12-05 12:21:13.475687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:48.698 [2024-12-05 12:21:13.500820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.698 [2024-12-05 12:21:13.538696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:37:48.698 [2024-12-05 12:21:13.571821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.698 [2024-12-05 12:21:13.610056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:37:48.698 [2024-12-05 12:21:13.661734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:48.698 [2024-12-05 12:21:13.703640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:37:48.960 Running I/O for 1 seconds... 00:37:48.960 Running I/O for 1 seconds... 00:37:48.960 Running I/O for 1 seconds... 00:37:48.960 Running I/O for 1 seconds... 
00:37:49.902 7326.00 IOPS, 28.62 MiB/s [2024-12-05T11:21:14.951Z] 11537.00 IOPS, 45.07 MiB/s 00:37:49.902 Latency(us) 00:37:49.902 [2024-12-05T11:21:14.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:49.902 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:37:49.902 Nvme1n1 : 1.02 7364.73 28.77 0.00 0.00 17263.31 4969.81 20753.07 00:37:49.902 [2024-12-05T11:21:14.951Z] =================================================================================================================== 00:37:49.902 [2024-12-05T11:21:14.951Z] Total : 7364.73 28.77 0.00 0.00 17263.31 4969.81 20753.07 00:37:49.902 00:37:49.902 Latency(us) 00:37:49.902 [2024-12-05T11:21:14.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:49.902 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:37:49.902 Nvme1n1 : 1.01 11575.87 45.22 0.00 0.00 11012.39 6034.77 16056.32 00:37:49.902 [2024-12-05T11:21:14.951Z] =================================================================================================================== 00:37:49.902 [2024-12-05T11:21:14.951Z] Total : 11575.87 45.22 0.00 0.00 11012.39 6034.77 16056.32 00:37:49.902 7183.00 IOPS, 28.06 MiB/s 00:37:49.902 Latency(us) 00:37:49.902 [2024-12-05T11:21:14.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:49.902 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:37:49.902 Nvme1n1 : 1.01 7301.84 28.52 0.00 0.00 17480.39 3986.77 33204.91 00:37:49.902 [2024-12-05T11:21:14.951Z] =================================================================================================================== 00:37:49.902 [2024-12-05T11:21:14.951Z] Total : 7301.84 28.52 0.00 0.00 17480.39 3986.77 33204.91 00:37:49.902 12:21:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1606915 00:37:49.902 12:21:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1606917 00:37:50.162 178320.00 IOPS, 696.56 MiB/s 00:37:50.162 Latency(us) 00:37:50.162 [2024-12-05T11:21:15.211Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:50.162 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:37:50.162 Nvme1n1 : 1.00 177966.37 695.18 0.00 0.00 715.17 302.08 1966.08 00:37:50.162 [2024-12-05T11:21:15.211Z] =================================================================================================================== 00:37:50.162 [2024-12-05T11:21:15.211Z] Total : 177966.37 695.18 0.00 0.00 715.17 302.08 1966.08 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1606921 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- 
# '[' tcp == tcp ']' 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:37:50.162 rmmod nvme_tcp 00:37:50.162 rmmod nvme_fabrics 00:37:50.162 rmmod nvme_keyring 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:37:50.162 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:37:50.163 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:37:50.163 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 1606702 ']' 00:37:50.163 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 1606702 00:37:50.163 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1606702 ']' 00:37:50.163 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1606702 00:37:50.163 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:37:50.163 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:50.163 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1606702 00:37:50.423 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:50.423 12:21:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:50.423 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1606702' 00:37:50.423 killing process with pid 1606702 00:37:50.423 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1606702 00:37:50.423 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1606702 00:37:50.423 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:37:50.423 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:37:50.423 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@254 -- # local dev 00:37:50.423 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # remove_target_ns 00:37:50.423 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:50.423 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:37:50.423 12:21:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # delete_main_bridge 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # return 0 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr 
flush dev cvl_0_1 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@274 -- # iptr 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-save 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-restore 00:37:52.969 00:37:52.969 real 0m13.344s 00:37:52.969 user 0m15.984s 00:37:52.969 sys 0m7.836s 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:37:52.969 ************************************ 00:37:52.969 END TEST nvmf_bdev_io_wait 00:37:52.969 ************************************ 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:52.969 ************************************ 
00:37:52.969 START TEST nvmf_queue_depth 00:37:52.969 ************************************ 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:37:52.969 * Looking for test storage... 00:37:52.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 
'op=<' 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 
00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:52.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.969 --rc genhtml_branch_coverage=1 00:37:52.969 --rc genhtml_function_coverage=1 00:37:52.969 --rc genhtml_legend=1 00:37:52.969 --rc geninfo_all_blocks=1 00:37:52.969 --rc geninfo_unexecuted_blocks=1 00:37:52.969 00:37:52.969 ' 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:52.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.969 --rc genhtml_branch_coverage=1 00:37:52.969 --rc genhtml_function_coverage=1 00:37:52.969 --rc genhtml_legend=1 00:37:52.969 --rc geninfo_all_blocks=1 00:37:52.969 --rc geninfo_unexecuted_blocks=1 00:37:52.969 00:37:52.969 ' 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:52.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.969 --rc genhtml_branch_coverage=1 00:37:52.969 --rc genhtml_function_coverage=1 00:37:52.969 --rc genhtml_legend=1 00:37:52.969 --rc geninfo_all_blocks=1 00:37:52.969 --rc 
geninfo_unexecuted_blocks=1 00:37:52.969 00:37:52.969 ' 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:52.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:52.969 --rc genhtml_branch_coverage=1 00:37:52.969 --rc genhtml_function_coverage=1 00:37:52.969 --rc genhtml_legend=1 00:37:52.969 --rc geninfo_all_blocks=1 00:37:52.969 --rc geninfo_unexecuted_blocks=1 00:37:52.969 00:37:52.969 ' 00:37:52.969 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.970 12:21:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:37:52.970 12:21:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:37:52.970 12:21:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:38:01.114 12:21:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:01.114 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 
00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:01.114 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:01.114 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:01.114 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@247 -- # create_target_ns 00:38:01.114 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:01.115 12:21:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:38:01.115 12:21:24 
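The `set_up lo NVMF_TARGET_NS_CMD` trace above shows setup.sh's nameref pattern: helpers take an optional variable *name* holding the `ip netns exec <ns>` prefix array and resolve it with `local -n`, so the same helper runs a command either on the host or inside the target namespace. A minimal sketch of that pattern (reconstructed for illustration, with an `echo` stand-in for the real netns prefix so it runs unprivileged; `run_in_ns` is a hypothetical name, not the actual setup.sh helper):

```shell
# Stand-in for the real prefix array, which in setup.sh is
# NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_TARGET_NS_CMD=(echo "[netns nvmf_ns_spdk]")

run_in_ns() {
    local cmd=$1 in_ns=$2
    if [[ -n $in_ns ]]; then
        # nameref: "ns" becomes an alias for the array whose *name* was passed
        local -n ns=$in_ns
    fi
    # With a prefix, this expands to: <prefix...> <cmd>; without one, just <cmd>
    eval "${ns[*]} $cmd"
}

run_in_ns "ip link set lo up" NVMF_TARGET_NS_CMD
# with the echo stand-in, prints: [netns nvmf_ns_spdk] ip link set lo up
```

Passing the array by name rather than by value is what lets the trace's `eval 'ip netns exec nvmf_ns_spdk ip link set lo up'` lines appear only when a namespace argument was supplied.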
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:38:01.115 12:21:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:38:01.115 10.0.0.1 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:38:01.115 10.0.0.2 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 
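The `val_to_ip 167772161` / `printf '%u.%u.%u.%u\n' 10 0 0 1` steps traced above convert the integer IP pool value (`ip_pool=0x0a000001`) into dotted-quad form before `ip addr add`. A self-contained sketch of that conversion, reconstructed from the trace rather than copied from the SPDK source:

```shell
# Convert a 32-bit integer to dotted-quad IPv4 notation by extracting
# each octet with shifts and masks (most significant octet first).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2
```

This is why the trace increments the pool by 2 per interface pair: consecutive integers yield the consecutive initiator/target addresses 10.0.0.1 and 10.0.0.2.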
00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:38:01.115 12:21:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo 
cvl_0_0 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:38:01.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:01.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.661 ms 00:38:01.115 00:38:01.115 --- 10.0.0.1 ping statistics --- 00:38:01.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.115 rtt min/avg/max/mdev = 0.661/0.661/0.661/0.000 ms 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- 
# ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:38:01.115 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:38:01.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:01.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:38:01.115 00:38:01.115 --- 10.0.0.2 ping statistics --- 00:38:01.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:01.115 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:01.116 12:21:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@159 -- # get_net_dev initiator1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 
00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth 
-- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:38:01.116 ' 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:01.116 12:21:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=1611599 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 1611599 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1611599 ']' 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:01.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:01.116 12:21:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:01.116 [2024-12-05 12:21:25.417148] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:01.116 [2024-12-05 12:21:25.418285] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:38:01.116 [2024-12-05 12:21:25.418333] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:01.116 [2024-12-05 12:21:25.518655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.116 [2024-12-05 12:21:25.569024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:01.116 [2024-12-05 12:21:25.569069] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:01.116 [2024-12-05 12:21:25.569077] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:01.116 [2024-12-05 12:21:25.569085] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:01.116 [2024-12-05 12:21:25.569091] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:01.116 [2024-12-05 12:21:25.569837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:01.116 [2024-12-05 12:21:25.646882] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:01.116 [2024-12-05 12:21:25.647158] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:01.378 [2024-12-05 12:21:26.278705] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:01.378 Malloc0 00:38:01.378 12:21:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:01.378 [2024-12-05 12:21:26.362862] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:01.378 
12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1611655 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1611655 /var/tmp/bdevperf.sock 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1611655 ']' 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:01.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:01.378 12:21:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:01.378 [2024-12-05 12:21:26.421566] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:38:01.378 [2024-12-05 12:21:26.421627] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1611655 ] 00:38:01.639 [2024-12-05 12:21:26.515073] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:01.639 [2024-12-05 12:21:26.568308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.210 12:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:02.210 12:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:38:02.210 12:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:38:02.210 12:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.210 12:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:02.470 NVMe0n1 00:38:02.470 12:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.470 12:21:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:02.828 Running I/O for 10 seconds... 
00:38:04.751 8192.00 IOPS, 32.00 MiB/s [2024-12-05T11:21:30.744Z] 8465.00 IOPS, 33.07 MiB/s [2024-12-05T11:21:31.687Z] 9225.00 IOPS, 36.04 MiB/s [2024-12-05T11:21:32.629Z] 10239.25 IOPS, 40.00 MiB/s [2024-12-05T11:21:34.014Z] 10861.80 IOPS, 42.43 MiB/s [2024-12-05T11:21:34.954Z] 11297.00 IOPS, 44.13 MiB/s [2024-12-05T11:21:35.893Z] 11621.86 IOPS, 45.40 MiB/s [2024-12-05T11:21:36.833Z] 11854.62 IOPS, 46.31 MiB/s [2024-12-05T11:21:37.773Z] 12044.78 IOPS, 47.05 MiB/s [2024-12-05T11:21:37.773Z] 12187.50 IOPS, 47.61 MiB/s 00:38:12.724 Latency(us) 00:38:12.724 [2024-12-05T11:21:37.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.724 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:38:12.724 Verification LBA range: start 0x0 length 0x4000 00:38:12.724 NVMe0n1 : 10.05 12219.92 47.73 0.00 0.00 83519.34 21954.56 78206.29 00:38:12.724 [2024-12-05T11:21:37.773Z] =================================================================================================================== 00:38:12.724 [2024-12-05T11:21:37.773Z] Total : 12219.92 47.73 0.00 0.00 83519.34 21954.56 78206.29 00:38:12.724 { 00:38:12.724 "results": [ 00:38:12.724 { 00:38:12.724 "job": "NVMe0n1", 00:38:12.724 "core_mask": "0x1", 00:38:12.724 "workload": "verify", 00:38:12.724 "status": "finished", 00:38:12.724 "verify_range": { 00:38:12.724 "start": 0, 00:38:12.724 "length": 16384 00:38:12.724 }, 00:38:12.724 "queue_depth": 1024, 00:38:12.724 "io_size": 4096, 00:38:12.724 "runtime": 10.052687, 00:38:12.724 "iops": 12219.91692370408, 00:38:12.724 "mibps": 47.734050483219065, 00:38:12.724 "io_failed": 0, 00:38:12.724 "io_timeout": 0, 00:38:12.724 "avg_latency_us": 83519.34335088963, 00:38:12.724 "min_latency_us": 21954.56, 00:38:12.724 "max_latency_us": 78206.29333333333 00:38:12.724 } 00:38:12.724 ], 00:38:12.724 "core_count": 1 00:38:12.724 } 00:38:12.724 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 
-- # killprocess 1611655 00:38:12.724 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1611655 ']' 00:38:12.724 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1611655 00:38:12.724 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:12.724 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:12.724 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1611655 00:38:12.724 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:12.724 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:12.724 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1611655' 00:38:12.724 killing process with pid 1611655 00:38:12.724 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1611655 00:38:12.724 Received shutdown signal, test time was about 10.000000 seconds 00:38:12.724 00:38:12.724 Latency(us) 00:38:12.724 [2024-12-05T11:21:37.773Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:12.724 [2024-12-05T11:21:37.773Z] =================================================================================================================== 00:38:12.724 [2024-12-05T11:21:37.773Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:12.724 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1611655 00:38:12.985 12:21:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:38:12.985 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:38:12.985 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:38:12.985 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:38:12.985 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:38:12.985 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:38:12.985 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:38:12.985 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:38:12.985 rmmod nvme_tcp 00:38:12.985 rmmod nvme_fabrics 00:38:12.985 rmmod nvme_keyring 00:38:12.985 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:38:12.985 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:38:12.985 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:38:12.985 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 1611599 ']' 00:38:12.985 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 1611599 00:38:12.985 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1611599 ']' 00:38:12.985 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1611599 00:38:12.986 12:21:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:38:12.986 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:12.986 12:21:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1611599 00:38:12.986 12:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:12.986 12:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:12.986 12:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1611599' 00:38:12.986 killing process with pid 1611599 00:38:12.986 12:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1611599 00:38:12.986 12:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1611599 00:38:13.246 12:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:38:13.246 12:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:38:13.246 12:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@254 -- # local dev 00:38:13.246 12:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@257 -- # remove_target_ns 00:38:13.246 12:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:13.246 12:21:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:13.246 12:21:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@258 -- # delete_main_bridge 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@121 -- # return 0 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:38:15.158 12:21:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:38:15.158 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@274 -- # iptr 00:38:15.419 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-save 00:38:15.419 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:38:15.419 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-restore 00:38:15.419 00:38:15.419 real 0m22.666s 00:38:15.419 user 0m24.930s 00:38:15.419 sys 0m7.498s 00:38:15.419 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:15.419 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:38:15.419 ************************************ 00:38:15.419 END TEST nvmf_queue_depth 00:38:15.419 ************************************ 00:38:15.419 12:21:40 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:15.419 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:15.419 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:15.419 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:15.419 ************************************ 00:38:15.419 START TEST nvmf_target_multipath 00:38:15.419 ************************************ 00:38:15.419 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:38:15.419 * Looking for test storage... 00:38:15.419 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:15.419 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:15.419 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:38:15.419 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:15.681 12:21:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:15.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.681 --rc genhtml_branch_coverage=1 00:38:15.681 --rc genhtml_function_coverage=1 00:38:15.681 --rc genhtml_legend=1 00:38:15.681 --rc geninfo_all_blocks=1 00:38:15.681 --rc geninfo_unexecuted_blocks=1 00:38:15.681 00:38:15.681 ' 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:15.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.681 --rc genhtml_branch_coverage=1 00:38:15.681 --rc genhtml_function_coverage=1 00:38:15.681 --rc genhtml_legend=1 00:38:15.681 --rc geninfo_all_blocks=1 00:38:15.681 --rc geninfo_unexecuted_blocks=1 00:38:15.681 00:38:15.681 ' 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:15.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.681 --rc genhtml_branch_coverage=1 00:38:15.681 --rc genhtml_function_coverage=1 00:38:15.681 --rc genhtml_legend=1 00:38:15.681 --rc geninfo_all_blocks=1 00:38:15.681 --rc geninfo_unexecuted_blocks=1 00:38:15.681 00:38:15.681 ' 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:15.681 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.681 --rc genhtml_branch_coverage=1 00:38:15.681 --rc genhtml_function_coverage=1 00:38:15.681 --rc genhtml_legend=1 00:38:15.681 --rc geninfo_all_blocks=1 00:38:15.681 --rc geninfo_unexecuted_blocks=1 00:38:15.681 00:38:15.681 ' 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.681 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@50 -- # : 0 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:38:15.682 12:21:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # xtrace_disable 00:38:15.682 12:21:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@131 -- # pci_devs=() 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@131 -- # local -a pci_devs 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@132 -- # pci_net_devs=() 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@133 -- # pci_drivers=() 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@133 
-- # local -A pci_drivers 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@135 -- # net_devs=() 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@135 -- # local -ga net_devs 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@136 -- # e810=() 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@136 -- # local -ga e810 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@137 -- # x722=() 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@137 -- # local -ga x722 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@138 -- # mlx=() 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@138 -- # local -ga mlx 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:23.827 12:21:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 
00:38:23.827 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:23.827 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ e810 == 
e810 ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:23.827 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:23.827 12:21:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:23.827 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # is_hw=yes 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@247 -- # create_target_ns 00:38:23.827 12:21:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:38:23.827 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local 
-g _dev 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:38:23.828 12:21:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- 
# eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:38:23.828 10.0.0.1 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias' 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:38:23.828 10.0.0.2 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:38:23.828 12:21:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 1 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:38:23.828 
12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:23.828 12:21:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:23.828 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:23.828 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:23.828 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:23.828 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:38:23.828 
12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:38:23.828 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:23.828 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:23.828 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:38:23.828 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:38:23.828 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:23.829 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.696 ms 00:38:23.829 00:38:23.829 --- 10.0.0.1 ping statistics --- 00:38:23.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.829 rtt min/avg/max/mdev = 0.696/0.696/0.696/0.000 ms 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:23.829 12:21:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 
00:38:23.829 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:23.829 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.323 ms 00:38:23.829 00:38:23.829 --- 10.0.0.2 ping statistics --- 00:38:23.829 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:23.829 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@270 -- # return 0 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:23.829 12:21:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 
00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:38:23.829 12:21:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:38:23.829 ' 00:38:23.829 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 
00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:38:23.830 only one NIC for nvmf test 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:38:23.830 rmmod nvme_tcp 00:38:23.830 rmmod nvme_fabrics 00:38:23.830 rmmod nvme_keyring 00:38:23.830 12:21:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:23.830 12:21:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:25.746 12:21:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev 
cvl_0_1 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@105 -- # 
modprobe -v -r nvme-fabrics 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 
00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:38:25.746 00:38:25.746 real 0m10.039s 00:38:25.746 user 0m2.290s 00:38:25.746 sys 0m5.716s 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:38:25.746 ************************************ 00:38:25.746 END TEST nvmf_target_multipath 00:38:25.746 ************************************ 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:25.746 ************************************ 00:38:25.746 START TEST nvmf_zcopy 00:38:25.746 ************************************ 00:38:25.746 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:38:25.746 * Looking for test storage... 00:38:25.747 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:38:25.747 12:21:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:25.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.747 --rc genhtml_branch_coverage=1 00:38:25.747 --rc genhtml_function_coverage=1 00:38:25.747 --rc genhtml_legend=1 00:38:25.747 --rc geninfo_all_blocks=1 00:38:25.747 --rc geninfo_unexecuted_blocks=1 00:38:25.747 00:38:25.747 ' 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:25.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.747 --rc genhtml_branch_coverage=1 00:38:25.747 --rc genhtml_function_coverage=1 00:38:25.747 --rc genhtml_legend=1 00:38:25.747 --rc geninfo_all_blocks=1 00:38:25.747 --rc geninfo_unexecuted_blocks=1 00:38:25.747 00:38:25.747 ' 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:25.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.747 --rc genhtml_branch_coverage=1 00:38:25.747 --rc genhtml_function_coverage=1 00:38:25.747 --rc genhtml_legend=1 00:38:25.747 --rc geninfo_all_blocks=1 00:38:25.747 --rc geninfo_unexecuted_blocks=1 00:38:25.747 00:38:25.747 ' 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:25.747 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:25.747 --rc genhtml_branch_coverage=1 00:38:25.747 --rc genhtml_function_coverage=1 00:38:25.747 --rc genhtml_legend=1 00:38:25.747 --rc geninfo_all_blocks=1 00:38:25.747 
--rc geninfo_unexecuted_blocks=1 00:38:25.747 00:38:25.747 ' 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:38:25.747 12:21:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:38:25.747 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:38:25.748 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:38:25.748 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:38:25.748 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:25.748 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:38:25.748 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:38:25.748 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:38:25.748 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:25.748 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:25.748 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:25.748 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:38:25.748 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:38:25.748 12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:38:25.748 
12:21:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:33.882 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 
00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:38:33.883 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:38:33.883 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:38:33.883 Found net devices under 0000:4b:00.0: cvl_0_0 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 
00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:38:33.883 Found net devices under 0000:4b:00.1: cvl_0_1 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:38:33.883 12:21:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@247 -- # create_target_ns 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:38:33.883 12:21:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:38:33.883 12:21:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:38:33.883 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:38:33.884 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:38:33.884 10.0.0.1 00:38:33.884 12:21:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:38:33.884 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:38:33.884 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:33.884 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:33.884 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:38:33.884 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:38:33.884 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:38:33.884 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:38:33.884 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:38:33.884 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:38:33.884 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:38:33.884 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:38:33.884 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:38:33.884 10.0.0.2 00:38:33.884 12:21:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:38:33.884 12:21:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ 
-n cvl_0_0 ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:38:33.884 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:33.884 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.632 ms 00:38:33.884 00:38:33.884 --- 10.0.0.1 ping statistics --- 00:38:33.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.884 rtt min/avg/max/mdev = 0.632/0.632/0.632/0.000 ms 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:33.884 
12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:38:33.884 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:33.884 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.292 ms 00:38:33.884 00:38:33.884 --- 10.0.0.2 ping statistics --- 00:38:33.884 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.884 rtt min/avg/max/mdev = 0.292/0.292/0.292/0.000 ms 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 
00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:38:33.884 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:38:33.885 12:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev= 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:33.885 12:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev= 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:38:33.885 ' 00:38:33.885 12:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=1622343 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 1622343 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1622343 ']' 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:33.885 12:21:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:33.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:33.885 12:21:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:33.885 [2024-12-05 12:21:58.355520] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:33.885 [2024-12-05 12:21:58.357272] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:38:33.885 [2024-12-05 12:21:58.357352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:33.885 [2024-12-05 12:21:58.458199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:33.885 [2024-12-05 12:21:58.507489] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:33.885 [2024-12-05 12:21:58.507535] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:33.885 [2024-12-05 12:21:58.507544] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:33.885 [2024-12-05 12:21:58.507551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:33.885 [2024-12-05 12:21:58.507557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:33.885 [2024-12-05 12:21:58.508293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:33.885 [2024-12-05 12:21:58.585589] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:33.885 [2024-12-05 12:21:58.585861] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:34.146 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:34.147 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:38:34.147 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:38:34.147 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:34.147 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.408 [2024-12-05 12:21:59.221148] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.408 
12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.408 [2024-12-05 12:21:59.249500] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:38:34.408 12:21:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.408 malloc0 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:38:34.408 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:38:34.409 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:34.409 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:34.409 { 00:38:34.409 "params": { 00:38:34.409 "name": "Nvme$subsystem", 00:38:34.409 "trtype": "$TEST_TRANSPORT", 00:38:34.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:34.409 "adrfam": "ipv4", 00:38:34.409 "trsvcid": "$NVMF_PORT", 00:38:34.409 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:34.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:34.409 "hdgst": ${hdgst:-false}, 00:38:34.409 "ddgst": ${ddgst:-false} 00:38:34.409 }, 00:38:34.409 "method": "bdev_nvme_attach_controller" 00:38:34.409 } 00:38:34.409 EOF 00:38:34.409 )") 00:38:34.409 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:38:34.409 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 00:38:34.409 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:38:34.409 12:21:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:38:34.409 "params": { 00:38:34.409 "name": "Nvme1", 00:38:34.409 "trtype": "tcp", 00:38:34.409 "traddr": "10.0.0.2", 00:38:34.409 "adrfam": "ipv4", 00:38:34.409 "trsvcid": "4420", 00:38:34.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:34.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:34.409 "hdgst": false, 00:38:34.409 "ddgst": false 00:38:34.409 }, 00:38:34.409 "method": "bdev_nvme_attach_controller" 00:38:34.409 }' 00:38:34.409 [2024-12-05 12:21:59.353396] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:38:34.409 [2024-12-05 12:21:59.353471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622395 ] 00:38:34.409 [2024-12-05 12:21:59.445425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:34.670 [2024-12-05 12:21:59.498605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:34.670 Running I/O for 10 seconds... 
00:38:36.627 6412.00 IOPS, 50.09 MiB/s [2024-12-05T11:22:03.060Z] 6469.50 IOPS, 50.54 MiB/s [2024-12-05T11:22:04.003Z] 6494.00 IOPS, 50.73 MiB/s [2024-12-05T11:22:04.944Z] 6499.75 IOPS, 50.78 MiB/s [2024-12-05T11:22:05.883Z] 6873.00 IOPS, 53.70 MiB/s [2024-12-05T11:22:06.824Z] 7345.17 IOPS, 57.38 MiB/s [2024-12-05T11:22:07.764Z] 7679.71 IOPS, 60.00 MiB/s [2024-12-05T11:22:08.704Z] 7935.38 IOPS, 62.00 MiB/s [2024-12-05T11:22:10.086Z] 8127.44 IOPS, 63.50 MiB/s [2024-12-05T11:22:10.086Z] 8282.60 IOPS, 64.71 MiB/s 00:38:45.037 Latency(us) 00:38:45.037 [2024-12-05T11:22:10.086Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:45.037 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:38:45.037 Verification LBA range: start 0x0 length 0x1000 00:38:45.037 Nvme1n1 : 10.01 8285.87 64.73 0.00 0.00 15401.30 2402.99 27415.89 00:38:45.037 [2024-12-05T11:22:10.086Z] =================================================================================================================== 00:38:45.037 [2024-12-05T11:22:10.086Z] Total : 8285.87 64.73 0.00 0.00 15401.30 2402.99 27415.89 00:38:45.037 12:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1624375 00:38:45.037 12:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:38:45.037 12:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:45.037 12:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:38:45.037 12:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:38:45.037 12:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:38:45.037 12:22:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:38:45.037 12:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:38:45.037 12:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:38:45.037 { 00:38:45.037 "params": { 00:38:45.037 "name": "Nvme$subsystem", 00:38:45.037 "trtype": "$TEST_TRANSPORT", 00:38:45.037 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:45.037 "adrfam": "ipv4", 00:38:45.037 "trsvcid": "$NVMF_PORT", 00:38:45.037 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:45.037 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:45.037 "hdgst": ${hdgst:-false}, 00:38:45.037 "ddgst": ${ddgst:-false} 00:38:45.037 }, 00:38:45.037 "method": "bdev_nvme_attach_controller" 00:38:45.037 } 00:38:45.037 EOF 00:38:45.037 )") 00:38:45.037 12:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:38:45.037 [2024-12-05 12:22:09.808708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.037 [2024-12-05 12:22:09.808735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.037 12:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:38:45.037 12:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:38:45.037 12:22:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:38:45.037 "params": { 00:38:45.037 "name": "Nvme1", 00:38:45.037 "trtype": "tcp", 00:38:45.037 "traddr": "10.0.0.2", 00:38:45.037 "adrfam": "ipv4", 00:38:45.037 "trsvcid": "4420", 00:38:45.037 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:38:45.037 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:38:45.037 "hdgst": false, 00:38:45.037 "ddgst": false 00:38:45.037 }, 00:38:45.037 "method": "bdev_nvme_attach_controller" 00:38:45.037 }' 00:38:45.037 [2024-12-05 12:22:09.820676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.037 [2024-12-05 12:22:09.820684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.037 [2024-12-05 12:22:09.832673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.037 [2024-12-05 12:22:09.832680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.037 [2024-12-05 12:22:09.844673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.037 [2024-12-05 12:22:09.844680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.037 [2024-12-05 12:22:09.856673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.037 [2024-12-05 12:22:09.856680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.037 [2024-12-05 12:22:09.860750] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:38:45.037 [2024-12-05 12:22:09.860797] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1624375 ] 00:38:45.037 [2024-12-05 12:22:09.868673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.037 [2024-12-05 12:22:09.868680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.037 [2024-12-05 12:22:09.880673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.037 [2024-12-05 12:22:09.880679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.037 [2024-12-05 12:22:09.892673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.037 [2024-12-05 12:22:09.892680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.037 [2024-12-05 12:22:09.904672] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.037 [2024-12-05 12:22:09.904679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.037 [2024-12-05 12:22:09.916673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.037 [2024-12-05 12:22:09.916679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.037 [2024-12-05 12:22:09.928673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.037 [2024-12-05 12:22:09.928679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:45.038 [2024-12-05 12:22:09.940673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:45.038 [2024-12-05 12:22:09.940679] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:38:45.038 [2024-12-05 12:22:09.941715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:45.038 [2024-12-05 12:22:09.952674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:38:45.038 [2024-12-05 12:22:09.952682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:38:45.038 [2024-12-05 12:22:09.971111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
[... the two-line *ERROR* pair above (subsystem.c:2130 "Requested NSID 1 already in use" / nvmf_rpc.c:1520 "Unable to add namespace") repeats at roughly 12-15 ms intervals from 12:22:09.952 through 12:22:12.228; only these distinct events are interleaved: ...]
00:38:45.299 Running I/O for 5 seconds...
00:38:46.339 19083.00 IOPS, 149.09 MiB/s [2024-12-05T11:22:11.388Z]
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 [2024-12-05 12:22:12.241524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.241539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 [2024-12-05 12:22:12.255734] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.255749] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 [2024-12-05 12:22:12.268896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.268911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 19128.00 IOPS, 149.44 MiB/s [2024-12-05T11:22:12.428Z] [2024-12-05 12:22:12.281704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.281719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 [2024-12-05 12:22:12.296026] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.296040] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 [2024-12-05 12:22:12.308748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.308763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 [2024-12-05 12:22:12.321703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.321717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 [2024-12-05 12:22:12.335475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.335490] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 [2024-12-05 12:22:12.348425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.348439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 [2024-12-05 12:22:12.360884] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.360898] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 [2024-12-05 12:22:12.373523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.373537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 [2024-12-05 12:22:12.387894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.387908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 [2024-12-05 12:22:12.400590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.400605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 [2024-12-05 12:22:12.413863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.413877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.379 [2024-12-05 12:22:12.427429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.379 [2024-12-05 12:22:12.427444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.440488] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.440503] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:47.640 [2024-12-05 12:22:12.453736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.453750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.468269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.468284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.481114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.481128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.496008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.496023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.509180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.509195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.524260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.524275] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.537197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.537211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.551501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.551516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.564596] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.564610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.577659] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.577674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.592343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.592357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.604981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.604995] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.619505] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.619520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.632519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.632533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.645479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.645493] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.660083] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.660097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.673060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.673074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.640 [2024-12-05 12:22:12.687748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.640 [2024-12-05 12:22:12.687763] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.901 [2024-12-05 12:22:12.701010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.901 [2024-12-05 12:22:12.701024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.901 [2024-12-05 12:22:12.715446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.901 [2024-12-05 12:22:12.715464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.901 [2024-12-05 12:22:12.728527] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.901 [2024-12-05 12:22:12.728541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.901 [2024-12-05 12:22:12.741503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.901 [2024-12-05 12:22:12.741518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.755947] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 [2024-12-05 12:22:12.755962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.768938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 [2024-12-05 12:22:12.768951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.783539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 
[2024-12-05 12:22:12.783553] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.796560] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 [2024-12-05 12:22:12.796574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.809915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 [2024-12-05 12:22:12.809929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.824262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 [2024-12-05 12:22:12.824276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.837058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 [2024-12-05 12:22:12.837072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.851375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 [2024-12-05 12:22:12.851390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.864411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 [2024-12-05 12:22:12.864425] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.877427] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 [2024-12-05 12:22:12.877441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.891548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 [2024-12-05 12:22:12.891563] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.904248] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 [2024-12-05 12:22:12.904262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.916679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 [2024-12-05 12:22:12.916693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.929215] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 [2024-12-05 12:22:12.929229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:47.902 [2024-12-05 12:22:12.943229] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:47.902 [2024-12-05 12:22:12.943243] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.162 [2024-12-05 12:22:12.956220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.162 [2024-12-05 12:22:12.956235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.162 [2024-12-05 12:22:12.969526] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.162 [2024-12-05 12:22:12.969540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.162 [2024-12-05 12:22:12.983525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.162 [2024-12-05 12:22:12.983539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.162 [2024-12-05 12:22:12.996509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.162 [2024-12-05 12:22:12.996523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:48.162 [2024-12-05 12:22:13.009478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.162 [2024-12-05 12:22:13.009496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.162 [2024-12-05 12:22:13.023894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.162 [2024-12-05 12:22:13.023908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.162 [2024-12-05 12:22:13.036845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.162 [2024-12-05 12:22:13.036860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.162 [2024-12-05 12:22:13.049622] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.162 [2024-12-05 12:22:13.049636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.162 [2024-12-05 12:22:13.063563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.163 [2024-12-05 12:22:13.063578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.163 [2024-12-05 12:22:13.076571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.163 [2024-12-05 12:22:13.076585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.163 [2024-12-05 12:22:13.089886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.163 [2024-12-05 12:22:13.089900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.163 [2024-12-05 12:22:13.103968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.163 [2024-12-05 12:22:13.103983] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.163 [2024-12-05 12:22:13.117188] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.163 [2024-12-05 12:22:13.117201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.163 [2024-12-05 12:22:13.131914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.163 [2024-12-05 12:22:13.131928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.163 [2024-12-05 12:22:13.144936] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.163 [2024-12-05 12:22:13.144949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.163 [2024-12-05 12:22:13.159491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.163 [2024-12-05 12:22:13.159506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.163 [2024-12-05 12:22:13.172601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.163 [2024-12-05 12:22:13.172615] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.163 [2024-12-05 12:22:13.185518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.163 [2024-12-05 12:22:13.185532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.163 [2024-12-05 12:22:13.199956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.163 [2024-12-05 12:22:13.199971] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.212750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.212765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.225396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.225410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.239845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.239859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.253068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.253083] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.268045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.268064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 19153.33 IOPS, 149.64 MiB/s [2024-12-05T11:22:13.472Z] [2024-12-05 12:22:13.281021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.281035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.295697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.295711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.308516] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.308531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.321421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.321435] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.336076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.336090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.349263] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.349276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.363987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.364001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.377315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.377328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.391575] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.391589] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.404555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.404570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.417354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.417368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.432515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.432530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.445282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 
[2024-12-05 12:22:13.445295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.423 [2024-12-05 12:22:13.459799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.423 [2024-12-05 12:22:13.459813] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.472833] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.472848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.485559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.485573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.500479] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.500494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.513641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.513654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.527836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.527854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.540598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.540612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.553180] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.553194] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.568053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.568068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.581137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.581150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.595581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.595595] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.608655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.608669] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.621511] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.621525] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.636168] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.636183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.649181] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.649195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.663880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.663895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:48.685 [2024-12-05 12:22:13.677154] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.677169] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.691838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.691852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.705003] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.705017] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.719442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.719464] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.685 [2024-12-05 12:22:13.732568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.685 [2024-12-05 12:22:13.732582] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.946 [2024-12-05 12:22:13.745406] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.946 [2024-12-05 12:22:13.745420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.946 [2024-12-05 12:22:13.759712] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.946 [2024-12-05 12:22:13.759727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.772602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.772616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.785446] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.785466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.800284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.800298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.813144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.813158] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.828208] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.828223] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.841541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.841555] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.855862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.855877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.868751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.868766] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.881327] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.881341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.895809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.895824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.908837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.908852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.921379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.921393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.935832] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.935847] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.948695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.948710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.961607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.961621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.975961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.975976] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:48.947 [2024-12-05 12:22:13.988905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:48.947 [2024-12-05 12:22:13.988919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.001925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 
[2024-12-05 12:22:14.001940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.015210] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.015224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.028851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.028865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.041765] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.041780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.055642] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.055657] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.068353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.068367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.081964] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.081979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.095760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.095775] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.108790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.108805] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.122022] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.122036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.135866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.135881] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.148975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.148989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.163773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.163788] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.176930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.176944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.191816] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.191831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.204809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.204824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.217484] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.217498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:49.208 [2024-12-05 12:22:14.231972] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.231986] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.208 [2024-12-05 12:22:14.244885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.208 [2024-12-05 12:22:14.244899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.257709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.257724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.271998] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.272013] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 19146.50 IOPS, 149.58 MiB/s [2024-12-05T11:22:14.517Z] [2024-12-05 12:22:14.285078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.285096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.299812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.299826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.312660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.312674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.325611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.325625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:49.468 [2024-12-05 12:22:14.339778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.339793] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.352738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.352753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.365376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.365390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.380313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.380327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.393383] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.393397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.408239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.408254] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.421643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.421658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.435905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.435919] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.448655] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.448670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.462051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.462065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.475923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.475937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.488744] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.488758] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.501695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.501709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.468 [2024-12-05 12:22:14.515790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.468 [2024-12-05 12:22:14.515805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.528724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.528739] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.542246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.542264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.556264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.556279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.569259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.569273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.583621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.583636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.596609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.596624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.609813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.609827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.623736] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.623751] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.636827] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.636841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.649604] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.649619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.663783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 
[2024-12-05 12:22:14.663797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.676889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.676903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.689967] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.689981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.704211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.704226] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.717389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.717403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.731897] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.731912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.744791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.744805] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.757430] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.757444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.729 [2024-12-05 12:22:14.772121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.729 [2024-12-05 12:22:14.772136] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.785188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.785201] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.799914] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.799932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.812880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.812895] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.825645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.825659] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.839942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.839957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.853019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.853032] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.868057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.868072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.881282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.881295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:38:49.990 [2024-12-05 12:22:14.895888] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.895903] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.908688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.908702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.921378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.921391] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.935679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.935693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.948518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.948532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.961684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.961698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.976310] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.976325] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:14.989440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:14.989459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:15.004097] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:15.004111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:15.017135] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:15.017149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:49.990 [2024-12-05 12:22:15.031829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:49.990 [2024-12-05 12:22:15.031843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.045207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.045221] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.059553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.059572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.072846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.072861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.085529] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.085543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.099621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.099635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.112282] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.112296] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.125497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.125511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.140474] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.140488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.153074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.153088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.167428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.167443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.180630] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.180645] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.193486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.193500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.207708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.207722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.220971] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 
[2024-12-05 12:22:15.220985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.235114] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.235128] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.248042] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.248056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.260829] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.260843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 [2024-12-05 12:22:15.273290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.273304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 19149.20 IOPS, 149.60 MiB/s [2024-12-05T11:22:15.299Z] [2024-12-05 12:22:15.287055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.287070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.250 00:38:50.250 Latency(us) 00:38:50.250 [2024-12-05T11:22:15.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.250 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:38:50.250 Nvme1n1 : 5.01 19152.92 149.63 0.00 0.00 6677.42 2703.36 12069.55 00:38:50.250 [2024-12-05T11:22:15.299Z] =================================================================================================================== 00:38:50.250 [2024-12-05T11:22:15.299Z] Total : 19152.92 149.63 0.00 0.00 6677.42 2703.36 12069.55 00:38:50.250 [2024-12-05 12:22:15.296678] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.250 [2024-12-05 12:22:15.296693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.510 [2024-12-05 12:22:15.308682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.510 [2024-12-05 12:22:15.308695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.510 [2024-12-05 12:22:15.320680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.510 [2024-12-05 12:22:15.320692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.510 [2024-12-05 12:22:15.332680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.510 [2024-12-05 12:22:15.332693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.510 [2024-12-05 12:22:15.344677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.510 [2024-12-05 12:22:15.344688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.510 [2024-12-05 12:22:15.356674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.510 [2024-12-05 12:22:15.356684] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.510 [2024-12-05 12:22:15.368674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.510 [2024-12-05 12:22:15.368682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.510 [2024-12-05 12:22:15.380676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:38:50.510 [2024-12-05 12:22:15.380685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.510 [2024-12-05 12:22:15.392674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:38:50.510 [2024-12-05 12:22:15.392682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:38:50.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1624375) - No such process 00:38:50.510 12:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1624375 00:38:50.510 12:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:38:50.510 12:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.510 12:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:50.510 12:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.510 12:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:50.510 12:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.510 12:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:50.510 delay0 00:38:50.510 12:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.510 12:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:38:50.510 12:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:50.510 12:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:38:50.510 12:22:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:50.510 12:22:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:38:50.771 [2024-12-05 12:22:15.564919] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:38:58.904 Initializing NVMe Controllers 00:38:58.904 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:38:58.904 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:38:58.904 Initialization complete. Launching workers. 00:38:58.904 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 294, failed: 10459 00:38:58.904 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 10703, failed to submit 50 00:38:58.904 success 10590, unsuccessful 113, failed 0 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:38:58.904 12:22:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:38:58.904 rmmod nvme_tcp 00:38:58.904 rmmod nvme_fabrics 00:38:58.904 rmmod nvme_keyring 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 1622343 ']' 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 1622343 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1622343 ']' 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1622343 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1622343 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1622343' 00:38:58.904 killing process with pid 1622343 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@973 -- # kill 1622343 00:38:58.904 12:22:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1622343 00:38:58.904 12:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:38:58.904 12:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:38:58.904 12:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@254 -- # local dev 00:38:58.904 12:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@257 -- # remove_target_ns 00:38:58.904 12:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:58.904 12:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:38:58.904 12:22:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@121 -- # return 0 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@274 -- # iptr 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-save 00:39:00.352 12:22:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-restore 00:39:00.352 00:39:00.352 real 0m34.703s 00:39:00.352 user 0m43.970s 00:39:00.352 sys 0m12.985s 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:00.352 ************************************ 00:39:00.352 END TEST nvmf_zcopy 00:39:00.352 ************************************ 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:00.352 ************************************ 00:39:00.352 START TEST nvmf_nmic 00:39:00.352 ************************************ 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:00.352 * Looking for test storage... 
00:39:00.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:00.352 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:39:00.670 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:00.670 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:00.670 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:00.670 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- 
# case "$op" in 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:00.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.671 --rc genhtml_branch_coverage=1 00:39:00.671 --rc genhtml_function_coverage=1 00:39:00.671 --rc genhtml_legend=1 00:39:00.671 --rc geninfo_all_blocks=1 00:39:00.671 --rc geninfo_unexecuted_blocks=1 00:39:00.671 00:39:00.671 ' 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:00.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.671 --rc genhtml_branch_coverage=1 00:39:00.671 --rc genhtml_function_coverage=1 00:39:00.671 --rc genhtml_legend=1 00:39:00.671 --rc geninfo_all_blocks=1 00:39:00.671 --rc geninfo_unexecuted_blocks=1 00:39:00.671 00:39:00.671 ' 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:00.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.671 --rc genhtml_branch_coverage=1 00:39:00.671 --rc genhtml_function_coverage=1 00:39:00.671 --rc genhtml_legend=1 00:39:00.671 --rc geninfo_all_blocks=1 00:39:00.671 --rc geninfo_unexecuted_blocks=1 00:39:00.671 00:39:00.671 ' 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:00.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:00.671 --rc genhtml_branch_coverage=1 00:39:00.671 --rc genhtml_function_coverage=1 00:39:00.671 --rc genhtml_legend=1 00:39:00.671 --rc geninfo_all_blocks=1 00:39:00.671 --rc geninfo_unexecuted_blocks=1 00:39:00.671 00:39:00.671 ' 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:39:00.671 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:39:00.672 12:22:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:39:00.672 12:22:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # local -A pci_drivers 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:39:08.829 12:22:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:39:08.829 12:22:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:08.829 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:08.829 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:08.829 12:22:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:08.829 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:08.829 12:22:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:08.829 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:39:08.829 12:22:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@247 -- # create_target_ns 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:39:08.829 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:39:08.830 12:22:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:39:08.830 12:22:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.1 
00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:39:08.830 10.0.0.1 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:39:08.830 10.0.0.2 
00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:39:08.830 12:22:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:39:08.830 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:39:08.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:08.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.694 ms 00:39:08.831 00:39:08.831 --- 10.0.0.1 ping statistics --- 00:39:08.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:08.831 rtt min/avg/max/mdev = 0.694/0.694/0.694/0.000 ms 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:08.831 12:22:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:39:08.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:08.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:39:08.831 00:39:08.831 --- 10.0.0.2 ping statistics --- 00:39:08.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:08.831 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:39:08.831 12:22:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:39:08.831 12:22:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 
00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@332 -- # 
get_tcp_target_ip_address target1 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:39:08.831 ' 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@308 -- # 
NVMF_TRANSPORT_OPTS='-t tcp' 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:39:08.831 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:08.832 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:39:08.832 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:08.832 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:08.832 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=1631063 00:39:08.832 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 1631063 00:39:08.832 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:08.832 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1631063 ']' 00:39:08.832 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:08.832 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:39:08.832 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:08.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:08.832 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:08.832 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:08.832 [2024-12-05 12:22:33.138638] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:08.832 [2024-12-05 12:22:33.139739] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:39:08.832 [2024-12-05 12:22:33.139787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:08.832 [2024-12-05 12:22:33.239553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:08.832 [2024-12-05 12:22:33.294177] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:08.832 [2024-12-05 12:22:33.294222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:08.832 [2024-12-05 12:22:33.294234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:08.832 [2024-12-05 12:22:33.294241] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:08.832 [2024-12-05 12:22:33.294247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:08.832 [2024-12-05 12:22:33.296674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:08.832 [2024-12-05 12:22:33.296835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:08.832 [2024-12-05 12:22:33.296961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.832 [2024-12-05 12:22:33.296962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:08.832 [2024-12-05 12:22:33.376132] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:08.832 [2024-12-05 12:22:33.376886] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:08.832 [2024-12-05 12:22:33.377347] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:08.832 [2024-12-05 12:22:33.377893] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:08.832 [2024-12-05 12:22:33.377927] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:09.092 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:09.092 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:39:09.092 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:39:09.092 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:09.092 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:09.092 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:09.092 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:09.092 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.092 12:22:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:09.092 [2024-12-05 12:22:34.005837] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:09.092 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:09.093 Malloc0 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.093 12:22:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:09.093 [2024-12-05 12:22:34.102026] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:09.093 test case1: single bdev can't be used in multiple subsystems 
00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.093 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:09.093 [2024-12-05 12:22:34.137438] bdev.c:8515:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:39:09.093 [2024-12-05 12:22:34.137472] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:09.093 [2024-12-05 12:22:34.137481] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:09.352 request: 00:39:09.352 { 00:39:09.352 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:09.352 "namespace": { 00:39:09.352 "bdev_name": "Malloc0", 00:39:09.352 "no_auto_visible": false, 00:39:09.352 "hide_metadata": false 00:39:09.352 }, 00:39:09.352 "method": "nvmf_subsystem_add_ns", 00:39:09.352 "req_id": 1 00:39:09.352 } 00:39:09.352 Got JSON-RPC error response 00:39:09.352 response: 00:39:09.352 { 00:39:09.352 "code": -32602, 00:39:09.352 "message": "Invalid parameters" 00:39:09.352 } 00:39:09.352 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:09.353 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:09.353 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:09.353 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:09.353 Adding namespace failed - expected result. 
00:39:09.353 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:09.353 test case2: host connect to nvmf target in multiple paths 00:39:09.353 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:09.353 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:09.353 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:09.353 [2024-12-05 12:22:34.149593] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:09.353 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.353 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:09.612 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:09.873 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:09.873 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:39:09.873 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:09.873 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:09.873 12:22:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:39:12.415 12:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:12.415 12:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:12.415 12:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:12.415 12:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:39:12.415 12:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:12.415 12:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:39:12.415 12:22:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:12.415 [global] 00:39:12.415 thread=1 00:39:12.415 invalidate=1 00:39:12.415 rw=write 00:39:12.415 time_based=1 00:39:12.415 runtime=1 00:39:12.415 ioengine=libaio 00:39:12.415 direct=1 00:39:12.415 bs=4096 00:39:12.415 iodepth=1 00:39:12.415 norandommap=0 00:39:12.415 numjobs=1 00:39:12.415 00:39:12.415 verify_dump=1 00:39:12.415 verify_backlog=512 00:39:12.415 verify_state_save=0 00:39:12.415 do_verify=1 00:39:12.415 verify=crc32c-intel 00:39:12.415 [job0] 00:39:12.415 filename=/dev/nvme0n1 00:39:12.415 Could not set queue depth (nvme0n1) 00:39:12.415 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:12.415 fio-3.35 00:39:12.415 Starting 1 thread 00:39:13.797 00:39:13.797 job0: (groupid=0, jobs=1): err= 0: pid=1631995: Thu Dec 5 
12:22:38 2024 00:39:13.797 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:13.797 slat (nsec): min=7439, max=63294, avg=27296.29, stdev=2720.30 00:39:13.797 clat (usec): min=392, max=1125, avg=961.92, stdev=63.03 00:39:13.797 lat (usec): min=420, max=1152, avg=989.21, stdev=63.14 00:39:13.797 clat percentiles (usec): 00:39:13.797 | 1.00th=[ 775], 5.00th=[ 857], 10.00th=[ 906], 20.00th=[ 938], 00:39:13.797 | 30.00th=[ 947], 40.00th=[ 955], 50.00th=[ 963], 60.00th=[ 971], 00:39:13.797 | 70.00th=[ 988], 80.00th=[ 1004], 90.00th=[ 1020], 95.00th=[ 1045], 00:39:13.797 | 99.00th=[ 1106], 99.50th=[ 1106], 99.90th=[ 1123], 99.95th=[ 1123], 00:39:13.797 | 99.99th=[ 1123] 00:39:13.797 write: IOPS=725, BW=2901KiB/s (2971kB/s)(2904KiB/1001msec); 0 zone resets 00:39:13.797 slat (usec): min=9, max=30679, avg=73.12, stdev=1137.53 00:39:13.797 clat (usec): min=177, max=831, avg=593.95, stdev=109.18 00:39:13.797 lat (usec): min=188, max=31461, avg=667.06, stdev=1150.14 00:39:13.797 clat percentiles (usec): 00:39:13.797 | 1.00th=[ 306], 5.00th=[ 392], 10.00th=[ 441], 20.00th=[ 494], 00:39:13.797 | 30.00th=[ 562], 40.00th=[ 578], 50.00th=[ 603], 60.00th=[ 644], 00:39:13.797 | 70.00th=[ 668], 80.00th=[ 685], 90.00th=[ 725], 95.00th=[ 742], 00:39:13.797 | 99.00th=[ 791], 99.50th=[ 799], 99.90th=[ 832], 99.95th=[ 832], 00:39:13.797 | 99.99th=[ 832] 00:39:13.797 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:39:13.797 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:13.797 lat (usec) : 250=0.16%, 500=12.28%, 750=44.51%, 1000=34.65% 00:39:13.797 lat (msec) : 2=8.40% 00:39:13.797 cpu : usr=2.90%, sys=4.50%, ctx=1241, majf=0, minf=1 00:39:13.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:13.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:13.797 issued rwts: 
total=512,726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:13.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:13.797 00:39:13.797 Run status group 0 (all jobs): 00:39:13.797 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:39:13.797 WRITE: bw=2901KiB/s (2971kB/s), 2901KiB/s-2901KiB/s (2971kB/s-2971kB/s), io=2904KiB (2974kB), run=1001-1001msec 00:39:13.797 00:39:13.797 Disk stats (read/write): 00:39:13.797 nvme0n1: ios=538/558, merge=0/0, ticks=1436/264, in_queue=1700, util=98.80% 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:13.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:39:13.797 12:22:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:39:13.797 rmmod nvme_tcp 00:39:13.797 rmmod nvme_fabrics 00:39:13.797 rmmod nvme_keyring 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 1631063 ']' 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 1631063 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1631063 ']' 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1631063 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1631063 00:39:13.797 
12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1631063' 00:39:13.797 killing process with pid 1631063 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1631063 00:39:13.797 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1631063 00:39:14.057 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:39:14.057 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:39:14.057 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@254 -- # local dev 00:39:14.057 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@257 -- # remove_target_ns 00:39:14.057 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:14.057 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:14.057 12:22:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@121 -- # return 0 00:39:15.966 12:22:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:39:15.966 12:22:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@274 -- # iptr 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-save 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-restore 00:39:15.966 00:39:15.966 real 0m15.782s 00:39:15.966 user 0m35.515s 00:39:15.966 sys 0m7.267s 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:15.966 12:22:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:15.966 ************************************ 00:39:15.966 END TEST nvmf_nmic 00:39:15.966 ************************************ 00:39:16.226 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:16.226 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:16.226 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:16.226 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:16.226 ************************************ 00:39:16.226 START TEST nvmf_fio_target 00:39:16.226 ************************************ 00:39:16.226 12:22:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:39:16.226 * Looking for test storage... 00:39:16.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:16.226 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:16.226 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:39:16.226 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:16.226 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:16.226 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:16.226 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:39:16.227 
12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:39:16.227 12:22:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:16.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.227 --rc genhtml_branch_coverage=1 00:39:16.227 --rc genhtml_function_coverage=1 00:39:16.227 --rc genhtml_legend=1 00:39:16.227 --rc geninfo_all_blocks=1 00:39:16.227 --rc geninfo_unexecuted_blocks=1 00:39:16.227 00:39:16.227 ' 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:16.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.227 --rc genhtml_branch_coverage=1 00:39:16.227 --rc genhtml_function_coverage=1 00:39:16.227 --rc genhtml_legend=1 00:39:16.227 --rc geninfo_all_blocks=1 00:39:16.227 --rc geninfo_unexecuted_blocks=1 00:39:16.227 00:39:16.227 ' 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:16.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.227 --rc genhtml_branch_coverage=1 00:39:16.227 --rc genhtml_function_coverage=1 00:39:16.227 --rc genhtml_legend=1 00:39:16.227 --rc geninfo_all_blocks=1 00:39:16.227 --rc geninfo_unexecuted_blocks=1 00:39:16.227 00:39:16.227 ' 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:39:16.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:16.227 --rc genhtml_branch_coverage=1 00:39:16.227 --rc genhtml_function_coverage=1 00:39:16.227 --rc genhtml_legend=1 00:39:16.227 --rc geninfo_all_blocks=1 00:39:16.227 --rc geninfo_unexecuted_blocks=1 00:39:16.227 00:39:16.227 ' 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:16.227 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:16.488 12:22:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.488 12:22:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:16.488 12:22:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:16.488 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:39:16.489 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:39:16.489 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:16.489 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:39:16.489 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:39:16.489 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:39:16.489 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:16.489 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:16.489 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:16.489 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:39:16.489 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:39:16.489 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:39:16.489 12:22:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:39:24.625 12:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:24.625 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:24.626 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.626 
12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:24.626 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:24.626 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:24.626 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:24.626 12:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@247 -- # create_target_ns 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:39:24.626 12:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:39:24.626 12:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:39:24.626 10.0.0.1 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@11 -- # local val=167772162 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:39:24.626 10.0.0.2 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:24.626 12:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:24.626 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/cvl_0_0/ifalias 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:39:24.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:24.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.676 ms 00:39:24.627 00:39:24.627 --- 10.0.0.1 ping statistics --- 00:39:24.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.627 rtt min/avg/max/mdev = 0.676/0.676/0.676/0.000 ms 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns 
exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:39:24.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:24.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:39:24.627 00:39:24.627 --- 10.0.0.2 ping statistics --- 00:39:24.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.627 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:24.627 12:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # 
get_net_dev initiator1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:39:24.627 12:22:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:39:24.627 ' 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:24.627 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:39:24.628 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:39:24.628 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:39:24.628 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:39:24.628 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:24.628 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:24.628 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=1636609 00:39:24.628 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 1636609 00:39:24.628 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:24.628 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1636609 ']' 00:39:24.628 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:24.628 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:24.628 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:24.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:24.628 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:24.628 12:22:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:24.628 [2024-12-05 12:22:48.961115] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:24.628 [2024-12-05 12:22:48.962271] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:39:24.628 [2024-12-05 12:22:48.962322] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:24.628 [2024-12-05 12:22:49.060389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:24.628 [2024-12-05 12:22:49.113116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:24.628 [2024-12-05 12:22:49.113167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:24.628 [2024-12-05 12:22:49.113176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:24.628 [2024-12-05 12:22:49.113183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:24.628 [2024-12-05 12:22:49.113190] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:24.628 [2024-12-05 12:22:49.115266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:24.628 [2024-12-05 12:22:49.115428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:24.628 [2024-12-05 12:22:49.115591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:24.628 [2024-12-05 12:22:49.115592] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:24.628 [2024-12-05 12:22:49.193996] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:24.628 [2024-12-05 12:22:49.194910] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:24.628 [2024-12-05 12:22:49.195295] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:24.628 [2024-12-05 12:22:49.195841] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:24.628 [2024-12-05 12:22:49.195876] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:24.887 12:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:24.887 12:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:39:24.887 12:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:39:24.887 12:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:24.887 12:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:24.887 12:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:24.887 12:22:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:39:25.146 [2024-12-05 12:22:49.968447] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:25.146 12:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:25.406 12:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:39:25.406 12:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:25.406 12:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:39:25.406 12:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:25.666 
12:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:39:25.666 12:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:25.927 12:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:39:25.927 12:22:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:39:26.188 12:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:26.188 12:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:39:26.188 12:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:26.449 12:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:39:26.449 12:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:39:26.710 12:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:39:26.710 12:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:39:26.971 12:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:26.971 12:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:26.971 12:22:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:27.231 12:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:39:27.231 12:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:39:27.492 12:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:27.492 [2024-12-05 12:22:52.540431] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:27.754 12:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:39:27.754 12:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:39:28.015 12:22:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:28.587 12:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:39:28.587 12:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:39:28.587 12:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:28.587 12:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:39:28.587 12:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:39:28.587 12:22:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:39:30.503 12:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:39:30.503 12:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:39:30.503 12:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:39:30.503 12:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:39:30.503 12:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:39:30.503 12:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:39:30.503 12:22:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:39:30.503 [global] 00:39:30.503 thread=1 00:39:30.503 invalidate=1 
00:39:30.503 rw=write 00:39:30.503 time_based=1 00:39:30.503 runtime=1 00:39:30.503 ioengine=libaio 00:39:30.503 direct=1 00:39:30.503 bs=4096 00:39:30.503 iodepth=1 00:39:30.503 norandommap=0 00:39:30.503 numjobs=1 00:39:30.503 00:39:30.503 verify_dump=1 00:39:30.503 verify_backlog=512 00:39:30.503 verify_state_save=0 00:39:30.503 do_verify=1 00:39:30.503 verify=crc32c-intel 00:39:30.503 [job0] 00:39:30.503 filename=/dev/nvme0n1 00:39:30.503 [job1] 00:39:30.503 filename=/dev/nvme0n2 00:39:30.503 [job2] 00:39:30.503 filename=/dev/nvme0n3 00:39:30.503 [job3] 00:39:30.503 filename=/dev/nvme0n4 00:39:30.503 Could not set queue depth (nvme0n1) 00:39:30.503 Could not set queue depth (nvme0n2) 00:39:30.503 Could not set queue depth (nvme0n3) 00:39:30.503 Could not set queue depth (nvme0n4) 00:39:30.762 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:30.762 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:30.762 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:30.762 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:30.762 fio-3.35 00:39:30.762 Starting 4 threads 00:39:32.144 00:39:32.144 job0: (groupid=0, jobs=1): err= 0: pid=1637995: Thu Dec 5 12:22:56 2024 00:39:32.144 read: IOPS=561, BW=2246KiB/s (2300kB/s)(2248KiB/1001msec) 00:39:32.144 slat (nsec): min=7790, max=50082, avg=25126.27, stdev=7832.41 00:39:32.144 clat (usec): min=461, max=1007, avg=786.57, stdev=99.04 00:39:32.144 lat (usec): min=493, max=1033, avg=811.70, stdev=101.67 00:39:32.144 clat percentiles (usec): 00:39:32.144 | 1.00th=[ 537], 5.00th=[ 603], 10.00th=[ 652], 20.00th=[ 709], 00:39:32.144 | 30.00th=[ 742], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 816], 00:39:32.144 | 70.00th=[ 848], 80.00th=[ 873], 90.00th=[ 906], 95.00th=[ 930], 00:39:32.144 | 99.00th=[ 
963], 99.50th=[ 1004], 99.90th=[ 1012], 99.95th=[ 1012], 00:39:32.144 | 99.99th=[ 1012] 00:39:32.144 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:39:32.144 slat (nsec): min=10003, max=73393, avg=30263.61, stdev=10473.84 00:39:32.144 clat (usec): min=205, max=978, avg=489.07, stdev=117.07 00:39:32.144 lat (usec): min=238, max=989, avg=519.33, stdev=119.69 00:39:32.144 clat percentiles (usec): 00:39:32.144 | 1.00th=[ 260], 5.00th=[ 289], 10.00th=[ 318], 20.00th=[ 392], 00:39:32.144 | 30.00th=[ 437], 40.00th=[ 465], 50.00th=[ 494], 60.00th=[ 523], 00:39:32.144 | 70.00th=[ 545], 80.00th=[ 586], 90.00th=[ 635], 95.00th=[ 685], 00:39:32.144 | 99.00th=[ 783], 99.50th=[ 824], 99.90th=[ 906], 99.95th=[ 979], 00:39:32.144 | 99.99th=[ 979] 00:39:32.144 bw ( KiB/s): min= 4087, max= 4087, per=30.61%, avg=4087.00, stdev= 0.00, samples=1 00:39:32.144 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:39:32.144 lat (usec) : 250=0.57%, 500=33.61%, 750=40.67%, 1000=24.97% 00:39:32.144 lat (msec) : 2=0.19% 00:39:32.144 cpu : usr=3.90%, sys=3.30%, ctx=1587, majf=0, minf=1 00:39:32.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:32.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.144 issued rwts: total=562,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:32.144 job1: (groupid=0, jobs=1): err= 0: pid=1638011: Thu Dec 5 12:22:56 2024 00:39:32.144 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:32.144 slat (nsec): min=25735, max=55189, avg=26801.41, stdev=2352.74 00:39:32.144 clat (usec): min=728, max=1363, avg=1027.55, stdev=101.39 00:39:32.144 lat (usec): min=755, max=1390, avg=1054.35, stdev=101.29 00:39:32.144 clat percentiles (usec): 00:39:32.144 | 1.00th=[ 766], 5.00th=[ 816], 10.00th=[ 889], 
20.00th=[ 947], 00:39:32.144 | 30.00th=[ 996], 40.00th=[ 1020], 50.00th=[ 1045], 60.00th=[ 1074], 00:39:32.144 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:39:32.144 | 99.00th=[ 1221], 99.50th=[ 1237], 99.90th=[ 1369], 99.95th=[ 1369], 00:39:32.144 | 99.99th=[ 1369] 00:39:32.144 write: IOPS=728, BW=2913KiB/s (2983kB/s)(2916KiB/1001msec); 0 zone resets 00:39:32.144 slat (nsec): min=9849, max=65591, avg=29391.54, stdev=10895.03 00:39:32.144 clat (usec): min=173, max=1670, avg=587.62, stdev=143.71 00:39:32.144 lat (usec): min=184, max=1715, avg=617.02, stdev=149.96 00:39:32.144 clat percentiles (usec): 00:39:32.144 | 1.00th=[ 265], 5.00th=[ 330], 10.00th=[ 379], 20.00th=[ 478], 00:39:32.144 | 30.00th=[ 519], 40.00th=[ 570], 50.00th=[ 603], 60.00th=[ 635], 00:39:32.144 | 70.00th=[ 676], 80.00th=[ 709], 90.00th=[ 750], 95.00th=[ 791], 00:39:32.144 | 99.00th=[ 881], 99.50th=[ 914], 99.90th=[ 1663], 99.95th=[ 1663], 00:39:32.144 | 99.99th=[ 1663] 00:39:32.144 bw ( KiB/s): min= 4096, max= 4096, per=30.68%, avg=4096.00, stdev= 0.00, samples=1 00:39:32.144 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:32.144 lat (usec) : 250=0.48%, 500=13.78%, 750=39.48%, 1000=17.89% 00:39:32.144 lat (msec) : 2=28.36% 00:39:32.144 cpu : usr=1.80%, sys=3.60%, ctx=1243, majf=0, minf=1 00:39:32.144 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:32.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.144 issued rwts: total=512,729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.144 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:32.144 job2: (groupid=0, jobs=1): err= 0: pid=1638028: Thu Dec 5 12:22:56 2024 00:39:32.144 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:32.144 slat (nsec): min=7868, max=64670, avg=28950.08, stdev=3666.90 00:39:32.144 clat (usec): min=491, 
max=1260, avg=932.40, stdev=121.94 00:39:32.144 lat (usec): min=520, max=1288, avg=961.35, stdev=121.72 00:39:32.144 clat percentiles (usec): 00:39:32.144 | 1.00th=[ 652], 5.00th=[ 734], 10.00th=[ 783], 20.00th=[ 832], 00:39:32.144 | 30.00th=[ 873], 40.00th=[ 906], 50.00th=[ 938], 60.00th=[ 963], 00:39:32.144 | 70.00th=[ 1004], 80.00th=[ 1045], 90.00th=[ 1090], 95.00th=[ 1123], 00:39:32.144 | 99.00th=[ 1172], 99.50th=[ 1188], 99.90th=[ 1254], 99.95th=[ 1254], 00:39:32.144 | 99.99th=[ 1254] 00:39:32.144 write: IOPS=840, BW=3361KiB/s (3441kB/s)(3364KiB/1001msec); 0 zone resets 00:39:32.144 slat (usec): min=9, max=8685, avg=41.98, stdev=298.63 00:39:32.144 clat (usec): min=140, max=1131, avg=547.66, stdev=152.87 00:39:32.144 lat (usec): min=151, max=9040, avg=589.64, stdev=331.04 00:39:32.144 clat percentiles (usec): 00:39:32.144 | 1.00th=[ 217], 5.00th=[ 293], 10.00th=[ 334], 20.00th=[ 408], 00:39:32.144 | 30.00th=[ 469], 40.00th=[ 515], 50.00th=[ 553], 60.00th=[ 594], 00:39:32.144 | 70.00th=[ 627], 80.00th=[ 668], 90.00th=[ 734], 95.00th=[ 799], 00:39:32.144 | 99.00th=[ 906], 99.50th=[ 947], 99.90th=[ 1139], 99.95th=[ 1139], 00:39:32.144 | 99.99th=[ 1139] 00:39:32.144 bw ( KiB/s): min= 4096, max= 4096, per=30.68%, avg=4096.00, stdev= 0.00, samples=1 00:39:32.144 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:32.145 lat (usec) : 250=1.18%, 500=20.99%, 750=37.25%, 1000=28.75% 00:39:32.145 lat (msec) : 2=11.83% 00:39:32.145 cpu : usr=3.40%, sys=4.90%, ctx=1356, majf=0, minf=1 00:39:32.145 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:32.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.145 issued rwts: total=512,841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.145 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:32.145 job3: (groupid=0, jobs=1): err= 0: pid=1638035: Thu Dec 5 
12:22:56 2024 00:39:32.145 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:39:32.145 slat (nsec): min=25039, max=43986, avg=26144.71, stdev=2188.85 00:39:32.145 clat (usec): min=757, max=1300, avg=1002.25, stdev=99.90 00:39:32.145 lat (usec): min=783, max=1325, avg=1028.40, stdev=99.75 00:39:32.145 clat percentiles (usec): 00:39:32.145 | 1.00th=[ 783], 5.00th=[ 832], 10.00th=[ 865], 20.00th=[ 914], 00:39:32.145 | 30.00th=[ 955], 40.00th=[ 988], 50.00th=[ 1012], 60.00th=[ 1029], 00:39:32.145 | 70.00th=[ 1057], 80.00th=[ 1074], 90.00th=[ 1123], 95.00th=[ 1156], 00:39:32.145 | 99.00th=[ 1237], 99.50th=[ 1287], 99.90th=[ 1303], 99.95th=[ 1303], 00:39:32.145 | 99.99th=[ 1303] 00:39:32.145 write: IOPS=746, BW=2985KiB/s (3057kB/s)(2988KiB/1001msec); 0 zone resets 00:39:32.145 slat (nsec): min=9758, max=63773, avg=30787.65, stdev=8554.39 00:39:32.145 clat (usec): min=263, max=975, avg=590.08, stdev=122.20 00:39:32.145 lat (usec): min=273, max=1008, avg=620.87, stdev=124.72 00:39:32.145 clat percentiles (usec): 00:39:32.145 | 1.00th=[ 306], 5.00th=[ 379], 10.00th=[ 437], 20.00th=[ 490], 00:39:32.145 | 30.00th=[ 529], 40.00th=[ 562], 50.00th=[ 586], 60.00th=[ 619], 00:39:32.145 | 70.00th=[ 644], 80.00th=[ 685], 90.00th=[ 742], 95.00th=[ 816], 00:39:32.145 | 99.00th=[ 889], 99.50th=[ 914], 99.90th=[ 979], 99.95th=[ 979], 00:39:32.145 | 99.99th=[ 979] 00:39:32.145 bw ( KiB/s): min= 4096, max= 4096, per=30.68%, avg=4096.00, stdev= 0.00, samples=1 00:39:32.145 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:32.145 lat (usec) : 500=13.26%, 750=40.35%, 1000=24.70% 00:39:32.145 lat (msec) : 2=21.68% 00:39:32.145 cpu : usr=2.40%, sys=3.30%, ctx=1259, majf=0, minf=1 00:39:32.145 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:32.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.145 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:32.145 issued rwts: 
total=512,747,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:32.145 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:32.145 00:39:32.145 Run status group 0 (all jobs): 00:39:32.145 READ: bw=8384KiB/s (8585kB/s), 2046KiB/s-2246KiB/s (2095kB/s-2300kB/s), io=8392KiB (8593kB), run=1001-1001msec 00:39:32.145 WRITE: bw=13.0MiB/s (13.7MB/s), 2913KiB/s-4092KiB/s (2983kB/s-4190kB/s), io=13.1MiB (13.7MB), run=1001-1001msec 00:39:32.145 00:39:32.145 Disk stats (read/write): 00:39:32.145 nvme0n1: ios=562/781, merge=0/0, ticks=444/372, in_queue=816, util=87.07% 00:39:32.145 nvme0n2: ios=537/512, merge=0/0, ticks=831/286, in_queue=1117, util=96.73% 00:39:32.145 nvme0n3: ios=551/572, merge=0/0, ticks=784/248, in_queue=1032, util=97.78% 00:39:32.145 nvme0n4: ios=515/512, merge=0/0, ticks=694/277, in_queue=971, util=91.65% 00:39:32.145 12:22:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:39:32.145 [global] 00:39:32.145 thread=1 00:39:32.145 invalidate=1 00:39:32.145 rw=randwrite 00:39:32.145 time_based=1 00:39:32.145 runtime=1 00:39:32.145 ioengine=libaio 00:39:32.145 direct=1 00:39:32.145 bs=4096 00:39:32.145 iodepth=1 00:39:32.145 norandommap=0 00:39:32.145 numjobs=1 00:39:32.145 00:39:32.145 verify_dump=1 00:39:32.145 verify_backlog=512 00:39:32.145 verify_state_save=0 00:39:32.145 do_verify=1 00:39:32.145 verify=crc32c-intel 00:39:32.145 [job0] 00:39:32.145 filename=/dev/nvme0n1 00:39:32.145 [job1] 00:39:32.145 filename=/dev/nvme0n2 00:39:32.145 [job2] 00:39:32.145 filename=/dev/nvme0n3 00:39:32.145 [job3] 00:39:32.145 filename=/dev/nvme0n4 00:39:32.145 Could not set queue depth (nvme0n1) 00:39:32.145 Could not set queue depth (nvme0n2) 00:39:32.145 Could not set queue depth (nvme0n3) 00:39:32.145 Could not set queue depth (nvme0n4) 00:39:32.404 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:39:32.404 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:32.404 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:32.404 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:32.404 fio-3.35 00:39:32.404 Starting 4 threads 00:39:33.818 00:39:33.818 job0: (groupid=0, jobs=1): err= 0: pid=1638416: Thu Dec 5 12:22:58 2024 00:39:33.818 read: IOPS=19, BW=77.0KiB/s (78.8kB/s)(80.0KiB/1039msec) 00:39:33.818 slat (nsec): min=25254, max=26352, avg=25737.60, stdev=281.89 00:39:33.818 clat (usec): min=40833, max=41822, avg=41010.03, stdev=206.62 00:39:33.818 lat (usec): min=40859, max=41848, avg=41035.77, stdev=206.59 00:39:33.818 clat percentiles (usec): 00:39:33.818 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:39:33.818 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:33.818 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:39:33.818 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:39:33.818 | 99.99th=[41681] 00:39:33.818 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:39:33.818 slat (nsec): min=9354, max=65425, avg=27812.91, stdev=9606.68 00:39:33.818 clat (usec): min=133, max=978, avg=390.86, stdev=111.21 00:39:33.818 lat (usec): min=165, max=1023, avg=418.67, stdev=113.68 00:39:33.818 clat percentiles (usec): 00:39:33.818 | 1.00th=[ 172], 5.00th=[ 235], 10.00th=[ 269], 20.00th=[ 302], 00:39:33.818 | 30.00th=[ 330], 40.00th=[ 347], 50.00th=[ 371], 60.00th=[ 396], 00:39:33.818 | 70.00th=[ 433], 80.00th=[ 486], 90.00th=[ 562], 95.00th=[ 594], 00:39:33.818 | 99.00th=[ 668], 99.50th=[ 725], 99.90th=[ 979], 99.95th=[ 979], 00:39:33.818 | 99.99th=[ 979] 00:39:33.818 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, 
avg=4096.00, stdev= 0.00, samples=1 00:39:33.818 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:33.818 lat (usec) : 250=6.95%, 500=73.31%, 750=15.79%, 1000=0.19% 00:39:33.818 lat (msec) : 50=3.76% 00:39:33.818 cpu : usr=0.58%, sys=1.54%, ctx=532, majf=0, minf=1 00:39:33.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.818 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:33.818 job1: (groupid=0, jobs=1): err= 0: pid=1638420: Thu Dec 5 12:22:58 2024 00:39:33.818 read: IOPS=18, BW=75.0KiB/s (76.8kB/s)(76.0KiB/1013msec) 00:39:33.818 slat (nsec): min=10600, max=26788, avg=18509.74, stdev=7749.96 00:39:33.818 clat (usec): min=40841, max=41552, avg=41000.01, stdev=140.08 00:39:33.818 lat (usec): min=40868, max=41579, avg=41018.52, stdev=141.20 00:39:33.818 clat percentiles (usec): 00:39:33.818 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:39:33.818 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:39:33.818 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:39:33.818 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:39:33.818 | 99.99th=[41681] 00:39:33.818 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:39:33.818 slat (usec): min=9, max=288, avg=25.85, stdev=15.77 00:39:33.818 clat (usec): min=103, max=659, avg=423.30, stdev=79.25 00:39:33.818 lat (usec): min=114, max=706, avg=449.15, stdev=84.28 00:39:33.818 clat percentiles (usec): 00:39:33.818 | 1.00th=[ 229], 5.00th=[ 293], 10.00th=[ 330], 20.00th=[ 351], 00:39:33.818 | 30.00th=[ 367], 40.00th=[ 408], 50.00th=[ 441], 60.00th=[ 461], 00:39:33.818 | 70.00th=[ 474], 80.00th=[ 490], 
90.00th=[ 510], 95.00th=[ 537], 00:39:33.818 | 99.00th=[ 594], 99.50th=[ 619], 99.90th=[ 660], 99.95th=[ 660], 00:39:33.818 | 99.99th=[ 660] 00:39:33.818 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:39:33.818 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:33.818 lat (usec) : 250=1.51%, 500=82.49%, 750=12.43% 00:39:33.818 lat (msec) : 50=3.58% 00:39:33.818 cpu : usr=0.99%, sys=0.99%, ctx=531, majf=0, minf=1 00:39:33.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.818 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:33.818 job2: (groupid=0, jobs=1): err= 0: pid=1638432: Thu Dec 5 12:22:58 2024 00:39:33.818 read: IOPS=505, BW=2022KiB/s (2071kB/s)(2024KiB/1001msec) 00:39:33.818 slat (nsec): min=6060, max=45914, avg=26630.16, stdev=4575.73 00:39:33.818 clat (usec): min=646, max=41999, avg=1245.90, stdev=3599.16 00:39:33.818 lat (usec): min=653, max=42024, avg=1272.53, stdev=3599.10 00:39:33.818 clat percentiles (usec): 00:39:33.818 | 1.00th=[ 725], 5.00th=[ 783], 10.00th=[ 832], 20.00th=[ 873], 00:39:33.818 | 30.00th=[ 906], 40.00th=[ 922], 50.00th=[ 938], 60.00th=[ 947], 00:39:33.818 | 70.00th=[ 963], 80.00th=[ 971], 90.00th=[ 996], 95.00th=[ 1029], 00:39:33.818 | 99.00th=[ 1172], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:39:33.818 | 99.99th=[42206] 00:39:33.818 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:39:33.818 slat (nsec): min=9487, max=65957, avg=30410.55, stdev=7729.80 00:39:33.818 clat (usec): min=286, max=1058, avg=648.94, stdev=138.64 00:39:33.818 lat (usec): min=298, max=1090, avg=679.35, stdev=140.32 00:39:33.818 clat percentiles (usec): 00:39:33.818 | 
1.00th=[ 326], 5.00th=[ 396], 10.00th=[ 469], 20.00th=[ 537], 00:39:33.818 | 30.00th=[ 586], 40.00th=[ 611], 50.00th=[ 652], 60.00th=[ 685], 00:39:33.818 | 70.00th=[ 717], 80.00th=[ 766], 90.00th=[ 824], 95.00th=[ 873], 00:39:33.818 | 99.00th=[ 955], 99.50th=[ 1029], 99.90th=[ 1057], 99.95th=[ 1057], 00:39:33.818 | 99.99th=[ 1057] 00:39:33.818 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:39:33.818 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:33.818 lat (usec) : 500=7.07%, 750=33.20%, 1000=54.42% 00:39:33.818 lat (msec) : 2=4.81%, 4=0.10%, 50=0.39% 00:39:33.818 cpu : usr=2.60%, sys=2.80%, ctx=1019, majf=0, minf=1 00:39:33.818 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.818 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.818 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.818 issued rwts: total=506,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.818 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:33.818 job3: (groupid=0, jobs=1): err= 0: pid=1638439: Thu Dec 5 12:22:58 2024 00:39:33.818 read: IOPS=16, BW=66.6KiB/s (68.2kB/s)(68.0KiB/1021msec) 00:39:33.818 slat (nsec): min=26522, max=27251, avg=26899.82, stdev=139.08 00:39:33.818 clat (usec): min=40822, max=42050, avg=41813.57, stdev=366.41 00:39:33.818 lat (usec): min=40848, max=42078, avg=41840.47, stdev=366.42 00:39:33.818 clat percentiles (usec): 00:39:33.818 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41681], 00:39:33.818 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:39:33.818 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:33.818 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:33.818 | 99.99th=[42206] 00:39:33.818 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:39:33.818 slat (nsec): min=9746, 
max=55343, avg=32132.91, stdev=8797.46 00:39:33.818 clat (usec): min=118, max=1083, avg=564.03, stdev=145.19 00:39:33.819 lat (usec): min=129, max=1122, avg=596.16, stdev=148.67 00:39:33.819 clat percentiles (usec): 00:39:33.819 | 1.00th=[ 253], 5.00th=[ 310], 10.00th=[ 383], 20.00th=[ 433], 00:39:33.819 | 30.00th=[ 482], 40.00th=[ 529], 50.00th=[ 570], 60.00th=[ 603], 00:39:33.819 | 70.00th=[ 644], 80.00th=[ 693], 90.00th=[ 750], 95.00th=[ 783], 00:39:33.819 | 99.00th=[ 881], 99.50th=[ 971], 99.90th=[ 1090], 99.95th=[ 1090], 00:39:33.819 | 99.99th=[ 1090] 00:39:33.819 bw ( KiB/s): min= 4096, max= 4096, per=51.95%, avg=4096.00, stdev= 0.00, samples=1 00:39:33.819 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:39:33.819 lat (usec) : 250=0.95%, 500=31.00%, 750=55.39%, 1000=9.26% 00:39:33.819 lat (msec) : 2=0.19%, 50=3.21% 00:39:33.819 cpu : usr=0.88%, sys=1.47%, ctx=531, majf=0, minf=1 00:39:33.819 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:33.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:33.819 issued rwts: total=17,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:33.819 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:33.819 00:39:33.819 Run status group 0 (all jobs): 00:39:33.819 READ: bw=2164KiB/s (2216kB/s), 66.6KiB/s-2022KiB/s (68.2kB/s-2071kB/s), io=2248KiB (2302kB), run=1001-1039msec 00:39:33.819 WRITE: bw=7885KiB/s (8074kB/s), 1971KiB/s-2046KiB/s (2018kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1039msec 00:39:33.819 00:39:33.819 Disk stats (read/write): 00:39:33.819 nvme0n1: ios=65/512, merge=0/0, ticks=665/198, in_queue=863, util=87.47% 00:39:33.819 nvme0n2: ios=41/512, merge=0/0, ticks=653/215, in_queue=868, util=86.95% 00:39:33.819 nvme0n3: ios=393/512, merge=0/0, ticks=521/322, in_queue=843, util=91.97% 00:39:33.819 nvme0n4: ios=72/512, merge=0/0, ticks=853/260, 
in_queue=1113, util=96.90% 00:39:33.819 12:22:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:39:33.819 [global] 00:39:33.819 thread=1 00:39:33.819 invalidate=1 00:39:33.819 rw=write 00:39:33.819 time_based=1 00:39:33.819 runtime=1 00:39:33.819 ioengine=libaio 00:39:33.819 direct=1 00:39:33.819 bs=4096 00:39:33.819 iodepth=128 00:39:33.819 norandommap=0 00:39:33.819 numjobs=1 00:39:33.819 00:39:33.819 verify_dump=1 00:39:33.819 verify_backlog=512 00:39:33.819 verify_state_save=0 00:39:33.819 do_verify=1 00:39:33.819 verify=crc32c-intel 00:39:33.819 [job0] 00:39:33.819 filename=/dev/nvme0n1 00:39:33.819 [job1] 00:39:33.819 filename=/dev/nvme0n2 00:39:33.819 [job2] 00:39:33.819 filename=/dev/nvme0n3 00:39:33.819 [job3] 00:39:33.819 filename=/dev/nvme0n4 00:39:33.819 Could not set queue depth (nvme0n1) 00:39:33.819 Could not set queue depth (nvme0n2) 00:39:33.819 Could not set queue depth (nvme0n3) 00:39:33.819 Could not set queue depth (nvme0n4) 00:39:34.079 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:34.079 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:34.079 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:34.079 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:34.079 fio-3.35 00:39:34.079 Starting 4 threads 00:39:35.462 00:39:35.462 job0: (groupid=0, jobs=1): err= 0: pid=1638922: Thu Dec 5 12:23:00 2024 00:39:35.462 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:39:35.462 slat (nsec): min=945, max=16894k, avg=117201.28, stdev=806331.99 00:39:35.462 clat (usec): min=6033, max=59667, avg=16241.69, stdev=11551.27 00:39:35.462 lat (usec): min=6035, 
max=59674, avg=16358.89, stdev=11599.31 00:39:35.462 clat percentiles (usec): 00:39:35.462 | 1.00th=[ 7504], 5.00th=[ 7963], 10.00th=[ 8029], 20.00th=[ 8356], 00:39:35.462 | 30.00th=[ 8979], 40.00th=[ 9765], 50.00th=[11076], 60.00th=[13042], 00:39:35.462 | 70.00th=[14484], 80.00th=[25297], 90.00th=[35914], 95.00th=[42206], 00:39:35.462 | 99.00th=[58983], 99.50th=[59507], 99.90th=[59507], 99.95th=[59507], 00:39:35.462 | 99.99th=[59507] 00:39:35.462 write: IOPS=4667, BW=18.2MiB/s (19.1MB/s)(18.4MiB/1008msec); 0 zone resets 00:39:35.462 slat (nsec): min=1634, max=22045k, avg=94539.20, stdev=651890.27 00:39:35.462 clat (usec): min=301, max=47418, avg=11199.02, stdev=6893.16 00:39:35.462 lat (usec): min=4474, max=59368, avg=11293.55, stdev=6950.41 00:39:35.462 clat percentiles (usec): 00:39:35.462 | 1.00th=[ 5932], 5.00th=[ 7111], 10.00th=[ 7635], 20.00th=[ 7963], 00:39:35.462 | 30.00th=[ 8225], 40.00th=[ 8455], 50.00th=[ 8717], 60.00th=[ 8848], 00:39:35.462 | 70.00th=[ 9896], 80.00th=[11600], 90.00th=[20579], 95.00th=[27132], 00:39:35.462 | 99.00th=[43779], 99.50th=[46924], 99.90th=[47449], 99.95th=[47449], 00:39:35.462 | 99.99th=[47449] 00:39:35.462 bw ( KiB/s): min=12288, max=24576, per=19.27%, avg=18432.00, stdev=8688.93, samples=2 00:39:35.462 iops : min= 3072, max= 6144, avg=4608.00, stdev=2172.23, samples=2 00:39:35.462 lat (usec) : 500=0.01% 00:39:35.462 lat (msec) : 10=57.11%, 20=26.23%, 50=15.98%, 100=0.67% 00:39:35.462 cpu : usr=2.78%, sys=4.17%, ctx=355, majf=0, minf=1 00:39:35.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:39:35.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:35.462 issued rwts: total=4608,4705,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:35.462 job1: (groupid=0, jobs=1): err= 0: pid=1638923: Thu Dec 5 12:23:00 2024 
00:39:35.462 read: IOPS=5608, BW=21.9MiB/s (23.0MB/s)(22.1MiB/1008msec) 00:39:35.462 slat (nsec): min=940, max=16796k, avg=86495.94, stdev=728616.73 00:39:35.462 clat (usec): min=1396, max=59859, avg=11388.77, stdev=8247.90 00:39:35.462 lat (usec): min=1405, max=59867, avg=11475.27, stdev=8303.13 00:39:35.462 clat percentiles (usec): 00:39:35.462 | 1.00th=[ 2540], 5.00th=[ 4948], 10.00th=[ 5866], 20.00th=[ 6587], 00:39:35.462 | 30.00th=[ 7373], 40.00th=[ 8094], 50.00th=[ 8848], 60.00th=[ 9765], 00:39:35.462 | 70.00th=[12256], 80.00th=[14615], 90.00th=[17695], 95.00th=[23462], 00:39:35.462 | 99.00th=[56361], 99.50th=[56886], 99.90th=[60031], 99.95th=[60031], 00:39:35.462 | 99.99th=[60031] 00:39:35.462 write: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec); 0 zone resets 00:39:35.462 slat (nsec): min=1700, max=14080k, avg=75370.24, stdev=655778.08 00:39:35.462 clat (usec): min=1358, max=53352, avg=10188.38, stdev=7351.08 00:39:35.462 lat (usec): min=1368, max=53361, avg=10263.75, stdev=7389.04 00:39:35.462 clat percentiles (usec): 00:39:35.462 | 1.00th=[ 2180], 5.00th=[ 3982], 10.00th=[ 4293], 20.00th=[ 5800], 00:39:35.462 | 30.00th=[ 6325], 40.00th=[ 7111], 50.00th=[ 7898], 60.00th=[ 8979], 00:39:35.462 | 70.00th=[10683], 80.00th=[12780], 90.00th=[17957], 95.00th=[26608], 00:39:35.462 | 99.00th=[41157], 99.50th=[44303], 99.90th=[53216], 99.95th=[53216], 00:39:35.462 | 99.99th=[53216] 00:39:35.462 bw ( KiB/s): min=18068, max=30264, per=25.26%, avg=24166.00, stdev=8623.87, samples=2 00:39:35.462 iops : min= 4517, max= 7566, avg=6041.50, stdev=2155.97, samples=2 00:39:35.462 lat (msec) : 2=0.56%, 4=3.40%, 10=58.85%, 20=30.13%, 50=6.01% 00:39:35.462 lat (msec) : 100=1.05% 00:39:35.462 cpu : usr=4.67%, sys=6.45%, ctx=260, majf=0, minf=1 00:39:35.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:39:35.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:39:35.463 issued rwts: total=5653,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:35.463 job2: (groupid=0, jobs=1): err= 0: pid=1638932: Thu Dec 5 12:23:00 2024 00:39:35.463 read: IOPS=6458, BW=25.2MiB/s (26.5MB/s)(25.3MiB/1003msec) 00:39:35.463 slat (nsec): min=1020, max=14010k, avg=75768.49, stdev=615305.74 00:39:35.463 clat (usec): min=1986, max=24209, avg=9785.82, stdev=3126.07 00:39:35.463 lat (usec): min=1988, max=28110, avg=9861.58, stdev=3164.33 00:39:35.463 clat percentiles (usec): 00:39:35.463 | 1.00th=[ 4359], 5.00th=[ 5866], 10.00th=[ 6325], 20.00th=[ 7439], 00:39:35.463 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9765], 00:39:35.463 | 70.00th=[10683], 80.00th=[11994], 90.00th=[14222], 95.00th=[15401], 00:39:35.463 | 99.00th=[19792], 99.50th=[20579], 99.90th=[24249], 99.95th=[24249], 00:39:35.463 | 99.99th=[24249] 00:39:35.463 write: IOPS=6636, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1003msec); 0 zone resets 00:39:35.463 slat (nsec): min=1767, max=7523.1k, avg=70721.83, stdev=485699.17 00:39:35.463 clat (usec): min=1196, max=46543, avg=9479.12, stdev=6615.48 00:39:35.463 lat (usec): min=1206, max=46556, avg=9549.84, stdev=6659.03 00:39:35.463 clat percentiles (usec): 00:39:35.463 | 1.00th=[ 3621], 5.00th=[ 4490], 10.00th=[ 4817], 20.00th=[ 5800], 00:39:35.463 | 30.00th=[ 6718], 40.00th=[ 7570], 50.00th=[ 8029], 60.00th=[ 8717], 00:39:35.463 | 70.00th=[ 9372], 80.00th=[11076], 90.00th=[13435], 95.00th=[15401], 00:39:35.463 | 99.00th=[43254], 99.50th=[45876], 99.90th=[46400], 99.95th=[46400], 00:39:35.463 | 99.99th=[46400] 00:39:35.463 bw ( KiB/s): min=24576, max=28729, per=27.86%, avg=26652.50, stdev=2936.61, samples=2 00:39:35.463 iops : min= 6144, max= 7182, avg=6663.00, stdev=733.98, samples=2 00:39:35.463 lat (msec) : 2=0.20%, 4=1.13%, 10=68.45%, 20=27.74%, 50=2.48% 00:39:35.463 cpu : usr=5.39%, sys=6.39%, ctx=405, majf=0, minf=1 
00:39:35.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:39:35.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:35.463 issued rwts: total=6478,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:35.463 job3: (groupid=0, jobs=1): err= 0: pid=1638936: Thu Dec 5 12:23:00 2024 00:39:35.463 read: IOPS=6095, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1008msec) 00:39:35.463 slat (nsec): min=981, max=10885k, avg=77552.61, stdev=581124.34 00:39:35.463 clat (usec): min=2377, max=28219, avg=9944.95, stdev=3596.58 00:39:35.463 lat (usec): min=2380, max=28222, avg=10022.50, stdev=3629.40 00:39:35.463 clat percentiles (usec): 00:39:35.463 | 1.00th=[ 4621], 5.00th=[ 6325], 10.00th=[ 6849], 20.00th=[ 7177], 00:39:35.463 | 30.00th=[ 7898], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9765], 00:39:35.463 | 70.00th=[10814], 80.00th=[12387], 90.00th=[14091], 95.00th=[16909], 00:39:35.463 | 99.00th=[22414], 99.50th=[24773], 99.90th=[26346], 99.95th=[28181], 00:39:35.463 | 99.99th=[28181] 00:39:35.463 write: IOPS=6550, BW=25.6MiB/s (26.8MB/s)(25.8MiB/1008msec); 0 zone resets 00:39:35.463 slat (nsec): min=1670, max=11541k, avg=74404.58, stdev=445335.65 00:39:35.463 clat (usec): min=1121, max=32811, avg=10111.75, stdev=5062.05 00:39:35.463 lat (usec): min=1133, max=32816, avg=10186.15, stdev=5095.71 00:39:35.463 clat percentiles (usec): 00:39:35.463 | 1.00th=[ 2089], 5.00th=[ 4424], 10.00th=[ 5276], 20.00th=[ 6718], 00:39:35.463 | 30.00th=[ 7439], 40.00th=[ 7767], 50.00th=[ 8094], 60.00th=[ 9503], 00:39:35.463 | 70.00th=[12125], 80.00th=[13435], 90.00th=[16909], 95.00th=[20055], 00:39:35.463 | 99.00th=[31065], 99.50th=[31851], 99.90th=[32375], 99.95th=[32375], 00:39:35.463 | 99.99th=[32900] 00:39:35.463 bw ( KiB/s): min=22448, max=29410, per=27.10%, avg=25929.00, stdev=4922.88, 
samples=2 00:39:35.463 iops : min= 5612, max= 7352, avg=6482.00, stdev=1230.37, samples=2 00:39:35.463 lat (msec) : 2=0.45%, 4=1.80%, 10=59.45%, 20=34.36%, 50=3.94% 00:39:35.463 cpu : usr=2.88%, sys=6.75%, ctx=688, majf=0, minf=2 00:39:35.463 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:39:35.463 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:35.463 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:35.463 issued rwts: total=6144,6603,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:35.463 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:35.463 00:39:35.463 Run status group 0 (all jobs): 00:39:35.463 READ: bw=88.7MiB/s (93.0MB/s), 17.9MiB/s-25.2MiB/s (18.7MB/s-26.5MB/s), io=89.4MiB (93.7MB), run=1003-1008msec 00:39:35.463 WRITE: bw=93.4MiB/s (98.0MB/s), 18.2MiB/s-25.9MiB/s (19.1MB/s-27.2MB/s), io=94.2MiB (98.7MB), run=1003-1008msec 00:39:35.463 00:39:35.463 Disk stats (read/write): 00:39:35.463 nvme0n1: ios=4177/4608, merge=0/0, ticks=14744/13911, in_queue=28655, util=87.98% 00:39:35.463 nvme0n2: ios=4659/5120, merge=0/0, ticks=35115/35474, in_queue=70589, util=95.38% 00:39:35.463 nvme0n3: ios=5153/5189, merge=0/0, ticks=50160/50501, in_queue=100661, util=95.53% 00:39:35.463 nvme0n4: ios=5151/5503, merge=0/0, ticks=47091/53734, in_queue=100825, util=91.15% 00:39:35.463 12:23:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:39:35.463 [global] 00:39:35.463 thread=1 00:39:35.463 invalidate=1 00:39:35.463 rw=randwrite 00:39:35.463 time_based=1 00:39:35.463 runtime=1 00:39:35.463 ioengine=libaio 00:39:35.463 direct=1 00:39:35.463 bs=4096 00:39:35.463 iodepth=128 00:39:35.463 norandommap=0 00:39:35.463 numjobs=1 00:39:35.463 00:39:35.463 verify_dump=1 00:39:35.463 verify_backlog=512 00:39:35.463 verify_state_save=0 
00:39:35.463 do_verify=1 00:39:35.463 verify=crc32c-intel 00:39:35.463 [job0] 00:39:35.463 filename=/dev/nvme0n1 00:39:35.463 [job1] 00:39:35.463 filename=/dev/nvme0n2 00:39:35.463 [job2] 00:39:35.463 filename=/dev/nvme0n3 00:39:35.463 [job3] 00:39:35.463 filename=/dev/nvme0n4 00:39:35.463 Could not set queue depth (nvme0n1) 00:39:35.463 Could not set queue depth (nvme0n2) 00:39:35.463 Could not set queue depth (nvme0n3) 00:39:35.463 Could not set queue depth (nvme0n4) 00:39:35.724 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:35.724 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:35.724 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:35.724 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:39:35.724 fio-3.35 00:39:35.724 Starting 4 threads 00:39:37.111 00:39:37.111 job0: (groupid=0, jobs=1): err= 0: pid=1639445: Thu Dec 5 12:23:02 2024 00:39:37.111 read: IOPS=7626, BW=29.8MiB/s (31.2MB/s)(29.9MiB/1003msec) 00:39:37.111 slat (nsec): min=954, max=6346.0k, avg=64184.50, stdev=357022.91 00:39:37.111 clat (usec): min=1117, max=13761, avg=8585.97, stdev=1658.58 00:39:37.111 lat (usec): min=1818, max=15357, avg=8650.15, stdev=1654.19 00:39:37.111 clat percentiles (usec): 00:39:37.111 | 1.00th=[ 4015], 5.00th=[ 5604], 10.00th=[ 6194], 20.00th=[ 7046], 00:39:37.111 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9110], 60.00th=[ 9241], 00:39:37.111 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10421], 95.00th=[10814], 00:39:37.111 | 99.00th=[11469], 99.50th=[13173], 99.90th=[13304], 99.95th=[13304], 00:39:37.111 | 99.99th=[13698] 00:39:37.111 write: IOPS=7657, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1003msec); 0 zone resets 00:39:37.111 slat (nsec): min=1557, max=5374.4k, avg=62609.42, stdev=383541.41 00:39:37.111 clat 
(usec): min=772, max=16286, avg=7995.57, stdev=2028.82 00:39:37.111 lat (usec): min=784, max=16295, avg=8058.18, stdev=2022.88 00:39:37.111 clat percentiles (usec): 00:39:37.111 | 1.00th=[ 3621], 5.00th=[ 4146], 10.00th=[ 5669], 20.00th=[ 6325], 00:39:37.111 | 30.00th=[ 6718], 40.00th=[ 8225], 50.00th=[ 8455], 60.00th=[ 8717], 00:39:37.111 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9503], 95.00th=[10159], 00:39:37.111 | 99.00th=[14484], 99.50th=[16057], 99.90th=[16319], 99.95th=[16319], 00:39:37.111 | 99.99th=[16319] 00:39:37.111 bw ( KiB/s): min=28672, max=32768, per=32.63%, avg=30720.00, stdev=2896.31, samples=2 00:39:37.111 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:39:37.111 lat (usec) : 1000=0.02% 00:39:37.111 lat (msec) : 2=0.12%, 4=2.42%, 10=87.68%, 20=9.75% 00:39:37.111 cpu : usr=2.69%, sys=5.69%, ctx=553, majf=0, minf=1 00:39:37.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:39:37.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:37.111 issued rwts: total=7649,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:37.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:37.111 job1: (groupid=0, jobs=1): err= 0: pid=1639446: Thu Dec 5 12:23:02 2024 00:39:37.111 read: IOPS=5067, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1006msec) 00:39:37.111 slat (nsec): min=940, max=9150.2k, avg=100320.01, stdev=599018.06 00:39:37.111 clat (usec): min=1060, max=35449, avg=12725.36, stdev=3825.62 00:39:37.111 lat (usec): min=5352, max=35456, avg=12825.68, stdev=3867.28 00:39:37.111 clat percentiles (usec): 00:39:37.111 | 1.00th=[ 6194], 5.00th=[ 7504], 10.00th=[ 7832], 20.00th=[ 9765], 00:39:37.111 | 30.00th=[11338], 40.00th=[12125], 50.00th=[12387], 60.00th=[12911], 00:39:37.111 | 70.00th=[13960], 80.00th=[15270], 90.00th=[16450], 95.00th=[19530], 00:39:37.111 | 99.00th=[26346], 99.50th=[28967], 
99.90th=[32113], 99.95th=[35390], 00:39:37.111 | 99.99th=[35390] 00:39:37.111 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:39:37.111 slat (nsec): min=1658, max=40761k, avg=90955.59, stdev=692346.16 00:39:37.111 clat (usec): min=3838, max=51818, avg=12185.10, stdev=6637.44 00:39:37.111 lat (usec): min=3932, max=52008, avg=12276.06, stdev=6659.74 00:39:37.111 clat percentiles (usec): 00:39:37.111 | 1.00th=[ 5276], 5.00th=[ 6456], 10.00th=[ 7767], 20.00th=[ 8356], 00:39:37.111 | 30.00th=[ 9110], 40.00th=[10421], 50.00th=[11338], 60.00th=[12387], 00:39:37.111 | 70.00th=[13566], 80.00th=[14222], 90.00th=[14877], 95.00th=[16712], 00:39:37.111 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51643], 99.95th=[51643], 00:39:37.111 | 99.99th=[51643] 00:39:37.111 bw ( KiB/s): min=20480, max=20480, per=21.75%, avg=20480.00, stdev= 0.00, samples=2 00:39:37.111 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:39:37.111 lat (msec) : 2=0.01%, 4=0.03%, 10=30.10%, 20=66.44%, 50=2.52% 00:39:37.111 lat (msec) : 100=0.90% 00:39:37.111 cpu : usr=2.39%, sys=5.67%, ctx=547, majf=0, minf=2 00:39:37.111 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:39:37.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:37.111 issued rwts: total=5098,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:37.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:37.111 job2: (groupid=0, jobs=1): err= 0: pid=1639447: Thu Dec 5 12:23:02 2024 00:39:37.111 read: IOPS=3708, BW=14.5MiB/s (15.2MB/s)(15.1MiB/1044msec) 00:39:37.111 slat (nsec): min=922, max=46456k, avg=143476.07, stdev=1206472.81 00:39:37.111 clat (usec): min=8282, max=62282, avg=18919.85, stdev=10140.68 00:39:37.111 lat (usec): min=10017, max=62310, avg=19063.32, stdev=10163.95 00:39:37.111 clat percentiles (usec): 00:39:37.111 | 1.00th=[11338], 
5.00th=[12387], 10.00th=[13304], 20.00th=[14484], 00:39:37.111 | 30.00th=[15008], 40.00th=[15533], 50.00th=[15926], 60.00th=[16450], 00:39:37.111 | 70.00th=[17433], 80.00th=[19006], 90.00th=[21365], 95.00th=[53740], 00:39:37.111 | 99.00th=[57410], 99.50th=[57410], 99.90th=[57410], 99.95th=[60556], 00:39:37.111 | 99.99th=[62129] 00:39:37.111 write: IOPS=3923, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1044msec); 0 zone resets 00:39:37.111 slat (nsec): min=1528, max=5878.2k, avg=103645.51, stdev=492832.25 00:39:37.111 clat (usec): min=7247, max=60476, avg=14377.25, stdev=6470.47 00:39:37.111 lat (usec): min=7258, max=60487, avg=14480.90, stdev=6465.36 00:39:37.111 clat percentiles (usec): 00:39:37.112 | 1.00th=[ 7898], 5.00th=[ 8848], 10.00th=[10159], 20.00th=[11469], 00:39:37.112 | 30.00th=[12387], 40.00th=[13304], 50.00th=[13960], 60.00th=[14222], 00:39:37.112 | 70.00th=[15008], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:39:37.112 | 99.00th=[56886], 99.50th=[56886], 99.90th=[57410], 99.95th=[60031], 00:39:37.112 | 99.99th=[60556] 00:39:37.112 bw ( KiB/s): min=15928, max=16840, per=17.40%, avg=16384.00, stdev=644.88, samples=2 00:39:37.112 iops : min= 3982, max= 4210, avg=4096.00, stdev=161.22, samples=2 00:39:37.112 lat (msec) : 10=4.98%, 20=86.56%, 50=4.49%, 100=3.97% 00:39:37.112 cpu : usr=2.88%, sys=3.93%, ctx=436, majf=0, minf=1 00:39:37.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:39:37.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:37.112 issued rwts: total=3872,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:37.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:37.112 job3: (groupid=0, jobs=1): err= 0: pid=1639448: Thu Dec 5 12:23:02 2024 00:39:37.112 read: IOPS=7188, BW=28.1MiB/s (29.4MB/s)(28.2MiB/1006msec) 00:39:37.112 slat (nsec): min=969, max=5862.4k, avg=66154.71, stdev=423529.55 
00:39:37.112 clat (usec): min=1858, max=20017, avg=8589.74, stdev=1529.24 00:39:37.112 lat (usec): min=4907, max=20018, avg=8655.89, stdev=1564.51 00:39:37.112 clat percentiles (usec): 00:39:37.112 | 1.00th=[ 5866], 5.00th=[ 6587], 10.00th=[ 7046], 20.00th=[ 7701], 00:39:37.112 | 30.00th=[ 8029], 40.00th=[ 8160], 50.00th=[ 8291], 60.00th=[ 8455], 00:39:37.112 | 70.00th=[ 8848], 80.00th=[ 9372], 90.00th=[10421], 95.00th=[11207], 00:39:37.112 | 99.00th=[15664], 99.50th=[15926], 99.90th=[20055], 99.95th=[20055], 00:39:37.112 | 99.99th=[20055] 00:39:37.112 write: IOPS=7634, BW=29.8MiB/s (31.3MB/s)(30.0MiB/1006msec); 0 zone resets 00:39:37.112 slat (nsec): min=1605, max=5134.0k, avg=63488.24, stdev=400091.41 00:39:37.112 clat (usec): min=3728, max=19575, avg=8463.37, stdev=1884.73 00:39:37.112 lat (usec): min=3737, max=19576, avg=8526.86, stdev=1906.57 00:39:37.112 clat percentiles (usec): 00:39:37.112 | 1.00th=[ 4555], 5.00th=[ 5669], 10.00th=[ 7111], 20.00th=[ 7570], 00:39:37.112 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8291], 00:39:37.112 | 70.00th=[ 8586], 80.00th=[ 9241], 90.00th=[10421], 95.00th=[12518], 00:39:37.112 | 99.00th=[15533], 99.50th=[15664], 99.90th=[18744], 99.95th=[19530], 00:39:37.112 | 99.99th=[19530] 00:39:37.112 bw ( KiB/s): min=28672, max=32256, per=32.35%, avg=30464.00, stdev=2534.27, samples=2 00:39:37.112 iops : min= 7168, max= 8064, avg=7616.00, stdev=633.57, samples=2 00:39:37.112 lat (msec) : 2=0.01%, 4=0.25%, 10=87.18%, 20=12.53%, 50=0.02% 00:39:37.112 cpu : usr=3.68%, sys=8.66%, ctx=563, majf=0, minf=1 00:39:37.112 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:39:37.112 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:37.112 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:39:37.112 issued rwts: total=7232,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:37.112 latency : target=0, window=0, percentile=100.00%, depth=128 00:39:37.112 
00:39:37.112 Run status group 0 (all jobs): 00:39:37.112 READ: bw=89.2MiB/s (93.6MB/s), 14.5MiB/s-29.8MiB/s (15.2MB/s-31.2MB/s), io=93.2MiB (97.7MB), run=1003-1044msec 00:39:37.112 WRITE: bw=92.0MiB/s (96.4MB/s), 15.3MiB/s-29.9MiB/s (16.1MB/s-31.4MB/s), io=96.0MiB (101MB), run=1003-1044msec 00:39:37.112 00:39:37.112 Disk stats (read/write): 00:39:37.112 nvme0n1: ios=6390/6656, merge=0/0, ticks=25828/23433, in_queue=49261, util=84.57% 00:39:37.112 nvme0n2: ios=4144/4279, merge=0/0, ticks=17477/15105, in_queue=32582, util=90.93% 00:39:37.112 nvme0n3: ios=3129/3383, merge=0/0, ticks=15758/11936, in_queue=27694, util=95.46% 00:39:37.112 nvme0n4: ios=6201/6277, merge=0/0, ticks=25907/23877, in_queue=49784, util=94.98% 00:39:37.112 12:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:39:37.112 12:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1639781 00:39:37.112 12:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:39:37.112 12:23:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:39:37.112 [global] 00:39:37.112 thread=1 00:39:37.112 invalidate=1 00:39:37.112 rw=read 00:39:37.112 time_based=1 00:39:37.112 runtime=10 00:39:37.112 ioengine=libaio 00:39:37.112 direct=1 00:39:37.112 bs=4096 00:39:37.112 iodepth=1 00:39:37.112 norandommap=1 00:39:37.112 numjobs=1 00:39:37.112 00:39:37.112 [job0] 00:39:37.112 filename=/dev/nvme0n1 00:39:37.112 [job1] 00:39:37.112 filename=/dev/nvme0n2 00:39:37.112 [job2] 00:39:37.112 filename=/dev/nvme0n3 00:39:37.112 [job3] 00:39:37.112 filename=/dev/nvme0n4 00:39:37.112 Could not set queue depth (nvme0n1) 00:39:37.112 Could not set queue depth (nvme0n2) 00:39:37.112 Could not set queue depth (nvme0n3) 00:39:37.112 Could not set queue depth (nvme0n4) 
00:39:37.682 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:37.682 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:37.682 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:37.682 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:39:37.682 fio-3.35 00:39:37.682 Starting 4 threads 00:39:40.286 12:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:39:40.286 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=3284992, buflen=4096 00:39:40.286 fio: pid=1639971, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:40.286 12:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:39:40.545 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=10244096, buflen=4096 00:39:40.545 fio: pid=1639970, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:40.545 12:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:40.545 12:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:39:40.545 12:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:40.545 12:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:39:40.545 fio: io_u error on file /dev/nvme0n1: Input/output error: read offset=294912, buflen=4096 00:39:40.545 fio: pid=1639968, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:39:40.805 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=9814016, buflen=4096 00:39:40.805 fio: pid=1639969, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:39:40.805 12:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:40.805 12:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:39:40.805 00:39:40.805 job0: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=1639968: Thu Dec 5 12:23:05 2024 00:39:40.805 read: IOPS=24, BW=96.6KiB/s (99.0kB/s)(288KiB/2980msec) 00:39:40.805 slat (usec): min=25, max=7635, avg=195.11, stdev=1035.88 00:39:40.805 clat (usec): min=1015, max=42320, avg=41176.48, stdev=4816.42 00:39:40.805 lat (usec): min=1087, max=49045, avg=41373.82, stdev=4930.29 00:39:40.805 clat percentiles (usec): 00:39:40.805 | 1.00th=[ 1012], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:39:40.805 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:39:40.805 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:39:40.805 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:40.805 | 99.99th=[42206] 00:39:40.805 bw ( KiB/s): min= 96, max= 96, per=1.29%, avg=96.00, stdev= 0.00, samples=5 00:39:40.805 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:39:40.805 lat (msec) : 2=1.37%, 50=97.26% 00:39:40.805 cpu : usr=0.17%, sys=0.00%, ctx=75, majf=0, minf=1 
00:39:40.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:40.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.805 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.805 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:40.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:40.805 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1639969: Thu Dec 5 12:23:05 2024 00:39:40.805 read: IOPS=770, BW=3081KiB/s (3155kB/s)(9584KiB/3111msec) 00:39:40.805 slat (usec): min=6, max=17673, avg=58.98, stdev=665.57 00:39:40.805 clat (usec): min=580, max=41879, avg=1232.77, stdev=2317.30 00:39:40.805 lat (usec): min=593, max=41906, avg=1291.76, stdev=2408.69 00:39:40.805 clat percentiles (usec): 00:39:40.805 | 1.00th=[ 750], 5.00th=[ 906], 10.00th=[ 971], 20.00th=[ 1037], 00:39:40.805 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:39:40.805 | 70.00th=[ 1156], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1254], 00:39:40.805 | 99.00th=[ 1336], 99.50th=[ 1663], 99.90th=[41157], 99.95th=[41157], 00:39:40.805 | 99.99th=[41681] 00:39:40.805 bw ( KiB/s): min= 1272, max= 3560, per=41.47%, avg=3077.17, stdev=893.88, samples=6 00:39:40.805 iops : min= 318, max= 890, avg=769.17, stdev=223.45, samples=6 00:39:40.805 lat (usec) : 750=0.96%, 1000=12.31% 00:39:40.805 lat (msec) : 2=86.32%, 4=0.04%, 50=0.33% 00:39:40.805 cpu : usr=1.32%, sys=3.18%, ctx=2404, majf=0, minf=2 00:39:40.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:40.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.805 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.805 issued rwts: total=2397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:40.805 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:40.805 job2: 
(groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1639970: Thu Dec 5 12:23:05 2024 00:39:40.805 read: IOPS=905, BW=3619KiB/s (3706kB/s)(9.77MiB/2764msec) 00:39:40.805 slat (usec): min=6, max=18316, avg=41.72, stdev=472.21 00:39:40.805 clat (usec): min=453, max=1394, avg=1056.40, stdev=135.13 00:39:40.805 lat (usec): min=463, max=19317, avg=1098.13, stdev=487.53 00:39:40.805 clat percentiles (usec): 00:39:40.805 | 1.00th=[ 668], 5.00th=[ 807], 10.00th=[ 881], 20.00th=[ 947], 00:39:40.805 | 30.00th=[ 996], 40.00th=[ 1045], 50.00th=[ 1090], 60.00th=[ 1106], 00:39:40.805 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1205], 95.00th=[ 1237], 00:39:40.805 | 99.00th=[ 1303], 99.50th=[ 1319], 99.90th=[ 1352], 99.95th=[ 1385], 00:39:40.805 | 99.99th=[ 1401] 00:39:40.805 bw ( KiB/s): min= 3456, max= 4032, per=48.53%, avg=3601.60, stdev=247.25, samples=5 00:39:40.805 iops : min= 864, max= 1008, avg=900.40, stdev=61.81, samples=5 00:39:40.805 lat (usec) : 500=0.16%, 750=2.36%, 1000=29.06% 00:39:40.805 lat (msec) : 2=68.39% 00:39:40.805 cpu : usr=1.48%, sys=3.91%, ctx=2506, majf=0, minf=2 00:39:40.805 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:40.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.806 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.806 issued rwts: total=2502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:40.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:40.806 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1639971: Thu Dec 5 12:23:05 2024 00:39:40.806 read: IOPS=311, BW=1243KiB/s (1273kB/s)(3208KiB/2581msec) 00:39:40.806 slat (nsec): min=23625, max=59957, avg=26304.36, stdev=3237.24 00:39:40.806 clat (usec): min=853, max=42048, avg=3183.45, stdev=8696.00 00:39:40.806 lat (usec): min=879, max=42074, avg=3209.75, stdev=8695.91 
00:39:40.806 clat percentiles (usec): 00:39:40.806 | 1.00th=[ 996], 5.00th=[ 1074], 10.00th=[ 1123], 20.00th=[ 1156], 00:39:40.806 | 30.00th=[ 1188], 40.00th=[ 1205], 50.00th=[ 1237], 60.00th=[ 1254], 00:39:40.806 | 70.00th=[ 1270], 80.00th=[ 1303], 90.00th=[ 1336], 95.00th=[ 1450], 00:39:40.806 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:39:40.806 | 99.99th=[42206] 00:39:40.806 bw ( KiB/s): min= 96, max= 1896, per=16.54%, avg=1227.20, stdev=750.48, samples=5 00:39:40.806 iops : min= 24, max= 474, avg=306.80, stdev=187.62, samples=5 00:39:40.806 lat (usec) : 1000=1.49% 00:39:40.806 lat (msec) : 2=93.52%, 50=4.86% 00:39:40.806 cpu : usr=0.23%, sys=1.05%, ctx=803, majf=0, minf=2 00:39:40.806 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:39:40.806 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.806 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:39:40.806 issued rwts: total=803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:39:40.806 latency : target=0, window=0, percentile=100.00%, depth=1 00:39:40.806 00:39:40.806 Run status group 0 (all jobs): 00:39:40.806 READ: bw=7420KiB/s (7598kB/s), 96.6KiB/s-3619KiB/s (99.0kB/s-3706kB/s), io=22.5MiB (23.6MB), run=2581-3111msec 00:39:40.806 00:39:40.806 Disk stats (read/write): 00:39:40.806 nvme0n1: ios=68/0, merge=0/0, ticks=2801/0, in_queue=2801, util=94.62% 00:39:40.806 nvme0n2: ios=2375/0, merge=0/0, ticks=2677/0, in_queue=2677, util=93.43% 00:39:40.806 nvme0n3: ios=2373/0, merge=0/0, ticks=3220/0, in_queue=3220, util=99.67% 00:39:40.806 nvme0n4: ios=704/0, merge=0/0, ticks=2295/0, in_queue=2295, util=96.06% 00:39:41.066 12:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:41.066 12:23:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:39:41.066 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:41.066 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:39:41.327 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:41.327 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:39:41.588 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:39:41.588 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:39:41.849 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:39:41.849 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1639781 00:39:41.849 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:39:41.849 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:39:41.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:39:41.849 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:39:41.849 12:23:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:39:41.849 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:39:41.849 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:41.849 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:39:41.849 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:39:41.849 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:39:41.849 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:39:41.849 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:39:41.849 nvmf hotplug test: fio failed as expected 00:39:41.850 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:42.110 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:39:42.110 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:39:42.111 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:39:42.111 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:39:42.111 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@91 -- # nvmftestfini 00:39:42.111 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:39:42.111 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:39:42.111 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:39:42.111 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:39:42.111 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:39:42.111 12:23:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:39:42.111 rmmod nvme_tcp 00:39:42.111 rmmod nvme_fabrics 00:39:42.111 rmmod nvme_keyring 00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 1636609 ']' 00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 1636609 00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1636609 ']' 00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1636609 00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1636609 00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1636609' 00:39:42.111 killing process with pid 1636609 00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1636609 00:39:42.111 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1636609 00:39:42.373 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:39:42.373 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:39:42.373 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@254 -- # local dev 00:39:42.373 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:39:42.373 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:42.373 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:42.373 12:23:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:44.289 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:44.289 12:23:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:44.289 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@121 -- # return 0 00:39:44.289 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:44.289 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:39:44.289 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:44.289 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:39:44.289 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:39:44.289 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:44.289 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@274 -- # iptr 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-save 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-restore 00:39:44.290 00:39:44.290 real 0m28.219s 00:39:44.290 user 2m11.405s 00:39:44.290 sys 0m12.184s 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:39:44.290 ************************************ 00:39:44.290 END TEST nvmf_fio_target 00:39:44.290 ************************************ 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:44.290 12:23:09 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:44.290 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:44.552 ************************************ 00:39:44.552 START TEST nvmf_bdevio 00:39:44.552 ************************************ 00:39:44.552 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:39:44.552 * Looking for test storage... 00:39:44.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:44.552 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:44.552 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:39:44.552 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:44.552 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:44.552 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:44.552 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:44.552 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:44.552 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:39:44.552 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:39:44.552 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:39:44.553 12:23:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:44.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.553 --rc genhtml_branch_coverage=1 
00:39:44.553 --rc genhtml_function_coverage=1 00:39:44.553 --rc genhtml_legend=1 00:39:44.553 --rc geninfo_all_blocks=1 00:39:44.553 --rc geninfo_unexecuted_blocks=1 00:39:44.553 00:39:44.553 ' 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:44.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.553 --rc genhtml_branch_coverage=1 00:39:44.553 --rc genhtml_function_coverage=1 00:39:44.553 --rc genhtml_legend=1 00:39:44.553 --rc geninfo_all_blocks=1 00:39:44.553 --rc geninfo_unexecuted_blocks=1 00:39:44.553 00:39:44.553 ' 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:44.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.553 --rc genhtml_branch_coverage=1 00:39:44.553 --rc genhtml_function_coverage=1 00:39:44.553 --rc genhtml_legend=1 00:39:44.553 --rc geninfo_all_blocks=1 00:39:44.553 --rc geninfo_unexecuted_blocks=1 00:39:44.553 00:39:44.553 ' 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:44.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:44.553 --rc genhtml_branch_coverage=1 00:39:44.553 --rc genhtml_function_coverage=1 00:39:44.553 --rc genhtml_legend=1 00:39:44.553 --rc geninfo_all_blocks=1 00:39:44.553 --rc geninfo_unexecuted_blocks=1 00:39:44.553 00:39:44.553 ' 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:39:44.553 12:23:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:39:44.553 
12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:39:44.553 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:39:44.554 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:44.554 12:23:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:39:44.815 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:39:44.815 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:39:44.815 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:44.815 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:44.815 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:44.815 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:39:44.815 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:39:44.815 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:39:44.815 12:23:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 
00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # x722=() 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:39:52.958 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:52.958 12:23:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:39:52.958 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:52.958 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:39:52.959 Found net devices under 0000:4b:00.0: cvl_0_0 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:52.959 12:23:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:39:52.959 Found net devices under 0000:4b:00.1: cvl_0_1 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@247 -- # create_target_ns 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:39:52.959 12:23:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 
00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:39:52.959 
12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:39:52.959 10.0.0.1 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 
00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:39:52.959 10.0.0.2 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:39:52.959 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:39:52.960 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:39:52.960 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:39:52.960 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:39:52.960 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:39:52.960 12:23:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:52.960 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:52.960 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:39:52.960 12:23:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:39:52.960 12:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:52.960 12:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:39:52.960 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:52.960 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.607 ms 00:39:52.960 00:39:52.960 --- 10.0.0.1 ping statistics --- 00:39:52.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:52.960 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 
00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:39:52.960 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:52.960 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.293 ms 00:39:52.960 00:39:52.960 --- 10.0.0.2 ping statistics --- 00:39:52.960 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:52.960 rtt min/avg/max/mdev = 0.293/0.293/0.293/0.000 ms 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:39:52.960 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 
00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 00:39:52.961 
12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev= 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:39:52.961 12:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:39:52.961 12:23:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev= 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:39:52.961 ' 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=1645016 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 1645016 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1645016 ']' 00:39:52.961 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:52.962 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:52.962 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:52.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:52.962 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:52.962 12:23:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:52.962 [2024-12-05 12:23:17.275394] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:52.962 [2024-12-05 12:23:17.276533] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:39:52.962 [2024-12-05 12:23:17.276584] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:52.962 [2024-12-05 12:23:17.374864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:52.962 [2024-12-05 12:23:17.428639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:52.962 [2024-12-05 12:23:17.428689] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:52.962 [2024-12-05 12:23:17.428697] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:52.962 [2024-12-05 12:23:17.428705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:52.962 [2024-12-05 12:23:17.428711] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:52.962 [2024-12-05 12:23:17.430751] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:52.962 [2024-12-05 12:23:17.430984] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:52.962 [2024-12-05 12:23:17.431150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:52.962 [2024-12-05 12:23:17.431152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:52.962 [2024-12-05 12:23:17.509268] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:52.962 [2024-12-05 12:23:17.510505] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:52.962 [2024-12-05 12:23:17.510537] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:39:52.962 [2024-12-05 12:23:17.510951] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:52.962 [2024-12-05 12:23:17.511014] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:53.223 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:53.223 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:39:53.223 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:39:53.223 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:53.223 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.223 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.224 [2024-12-05 12:23:18.156195] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.224 Malloc0 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:53.224 [2024-12-05 12:23:18.256536] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:39:53.224 { 00:39:53.224 "params": { 00:39:53.224 "name": "Nvme$subsystem", 00:39:53.224 "trtype": "$TEST_TRANSPORT", 00:39:53.224 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:53.224 "adrfam": "ipv4", 00:39:53.224 "trsvcid": "$NVMF_PORT", 00:39:53.224 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:53.224 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:53.224 "hdgst": ${hdgst:-false}, 00:39:53.224 "ddgst": ${ddgst:-false} 00:39:53.224 }, 00:39:53.224 "method": "bdev_nvme_attach_controller" 00:39:53.224 } 00:39:53.224 EOF 00:39:53.224 )") 00:39:53.224 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:39:53.485 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
00:39:53.485 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:39:53.485 12:23:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:39:53.485 "params": { 00:39:53.485 "name": "Nvme1", 00:39:53.485 "trtype": "tcp", 00:39:53.485 "traddr": "10.0.0.2", 00:39:53.485 "adrfam": "ipv4", 00:39:53.485 "trsvcid": "4420", 00:39:53.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:53.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:53.485 "hdgst": false, 00:39:53.485 "ddgst": false 00:39:53.485 }, 00:39:53.485 "method": "bdev_nvme_attach_controller" 00:39:53.485 }' 00:39:53.485 [2024-12-05 12:23:18.314118] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:39:53.485 [2024-12-05 12:23:18.314192] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1645363 ] 00:39:53.485 [2024-12-05 12:23:18.408086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:39:53.485 [2024-12-05 12:23:18.465301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:53.485 [2024-12-05 12:23:18.465486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:53.485 [2024-12-05 12:23:18.465541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:53.744 I/O targets: 00:39:53.744 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:39:53.744 00:39:53.744 00:39:53.744 CUnit - A unit testing framework for C - Version 2.1-3 00:39:53.744 http://cunit.sourceforge.net/ 00:39:53.744 00:39:53.744 00:39:53.744 Suite: bdevio tests on: Nvme1n1 00:39:54.004 Test: blockdev write read block ...passed 00:39:54.004 Test: blockdev write zeroes read block ...passed 00:39:54.004 Test: blockdev write zeroes read no split ...passed 00:39:54.004 Test: blockdev 
write zeroes read split ...passed 00:39:54.004 Test: blockdev write zeroes read split partial ...passed 00:39:54.004 Test: blockdev reset ...[2024-12-05 12:23:18.880481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:39:54.004 [2024-12-05 12:23:18.880576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x549970 (9): Bad file descriptor 00:39:54.004 [2024-12-05 12:23:18.893369] bdev_nvme.c:2286:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:39:54.004 passed 00:39:54.004 Test: blockdev write read 8 blocks ...passed 00:39:54.004 Test: blockdev write read size > 128k ...passed 00:39:54.004 Test: blockdev write read invalid size ...passed 00:39:54.004 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:39:54.004 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:39:54.004 Test: blockdev write read max offset ...passed 00:39:54.266 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:39:54.266 Test: blockdev writev readv 8 blocks ...passed 00:39:54.266 Test: blockdev writev readv 30 x 1block ...passed 00:39:54.266 Test: blockdev writev readv block ...passed 00:39:54.266 Test: blockdev writev readv size > 128k ...passed 00:39:54.266 Test: blockdev writev readv size > 128k in two iovs ...passed 00:39:54.266 Test: blockdev comparev and writev ...[2024-12-05 12:23:19.114572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.266 [2024-12-05 12:23:19.114624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:39:54.266 [2024-12-05 12:23:19.114640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.266 
[2024-12-05 12:23:19.114650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:39:54.266 [2024-12-05 12:23:19.115157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.266 [2024-12-05 12:23:19.115169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:39:54.266 [2024-12-05 12:23:19.115184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.266 [2024-12-05 12:23:19.115192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:39:54.266 [2024-12-05 12:23:19.115703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.266 [2024-12-05 12:23:19.115715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:39:54.266 [2024-12-05 12:23:19.115730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.266 [2024-12-05 12:23:19.115738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:39:54.266 [2024-12-05 12:23:19.116244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.266 [2024-12-05 12:23:19.116255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:39:54.266 [2024-12-05 12:23:19.116269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:39:54.266 [2024-12-05 12:23:19.116277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:39:54.266 passed 00:39:54.266 Test: blockdev nvme passthru rw ...passed 00:39:54.266 Test: blockdev nvme passthru vendor specific ...[2024-12-05 12:23:19.201085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:54.266 [2024-12-05 12:23:19.201102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:39:54.266 [2024-12-05 12:23:19.201371] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:54.266 [2024-12-05 12:23:19.201382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:39:54.266 [2024-12-05 12:23:19.201683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:54.266 [2024-12-05 12:23:19.201695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:39:54.266 [2024-12-05 12:23:19.201924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:39:54.266 [2024-12-05 12:23:19.201935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:39:54.266 passed 00:39:54.266 Test: blockdev nvme admin passthru ...passed 00:39:54.266 Test: blockdev copy ...passed 00:39:54.266 00:39:54.266 Run Summary: Type Total Ran Passed Failed Inactive 00:39:54.266 suites 1 1 n/a 0 0 00:39:54.266 tests 23 23 23 0 0 00:39:54.266 asserts 152 152 152 0 n/a 00:39:54.266 00:39:54.266 Elapsed time = 1.031 
seconds 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:39:54.529 rmmod nvme_tcp 00:39:54.529 rmmod nvme_fabrics 00:39:54.529 rmmod nvme_keyring 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@336 -- # '[' -n 1645016 ']' 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 1645016 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1645016 ']' 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1645016 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1645016 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1645016' 00:39:54.529 killing process with pid 1645016 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1645016 00:39:54.529 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1645016 00:39:54.790 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:39:54.790 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:39:54.790 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@254 -- # local dev 00:39:54.790 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@257 -- # 
remove_target_ns 00:39:54.790 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:39:54.790 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:39:54.790 12:23:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@258 -- # delete_main_bridge 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@121 -- # return 0 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:39:57.335 12:23:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@274 -- # iptr 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-save 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-restore 00:39:57.335 00:39:57.335 real 0m12.452s 00:39:57.335 user 0m10.022s 00:39:57.335 sys 0m6.509s 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:39:57.335 ************************************ 00:39:57.335 END TEST nvmf_bdevio 00:39:57.335 ************************************ 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:39:57.335 00:39:57.335 real 5m3.751s 00:39:57.335 user 10m16.050s 00:39:57.335 sys 2m6.390s 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:57.335 12:23:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:57.335 ************************************ 00:39:57.335 END TEST nvmf_target_core_interrupt_mode 00:39:57.335 ************************************ 00:39:57.335 12:23:21 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:57.335 12:23:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:57.335 12:23:21 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:57.335 12:23:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:39:57.335 ************************************ 00:39:57.335 START TEST nvmf_interrupt 00:39:57.335 ************************************ 00:39:57.335 12:23:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:39:57.335 * Looking for test storage... 
00:39:57.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:57.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.335 --rc genhtml_branch_coverage=1 00:39:57.335 --rc genhtml_function_coverage=1 00:39:57.335 --rc genhtml_legend=1 00:39:57.335 --rc geninfo_all_blocks=1 00:39:57.335 --rc geninfo_unexecuted_blocks=1 00:39:57.335 00:39:57.335 ' 00:39:57.335 12:23:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:57.335 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.335 --rc genhtml_branch_coverage=1 00:39:57.335 --rc 
genhtml_function_coverage=1 00:39:57.335 --rc genhtml_legend=1 00:39:57.335 --rc geninfo_all_blocks=1 00:39:57.335 --rc geninfo_unexecuted_blocks=1 00:39:57.335 00:39:57.335 ' 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:57.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.336 --rc genhtml_branch_coverage=1 00:39:57.336 --rc genhtml_function_coverage=1 00:39:57.336 --rc genhtml_legend=1 00:39:57.336 --rc geninfo_all_blocks=1 00:39:57.336 --rc geninfo_unexecuted_blocks=1 00:39:57.336 00:39:57.336 ' 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:57.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:57.336 --rc genhtml_branch_coverage=1 00:39:57.336 --rc genhtml_function_coverage=1 00:39:57.336 --rc genhtml_legend=1 00:39:57.336 --rc geninfo_all_blocks=1 00:39:57.336 --rc geninfo_unexecuted_blocks=1 00:39:57.336 00:39:57.336 ' 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:39:57.336 
12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:39:57.336 
12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@50 -- # : 0 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@54 -- # have_pci_nics=0 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@296 -- # prepare_net_devs 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # local -g is_hw=no 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@260 -- # remove_target_ns 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@313 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null' 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # xtrace_disable 00:39:57.336 12:23:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # pci_devs=() 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # local -a pci_devs 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # pci_net_devs=() 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # pci_drivers=() 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # local -A pci_drivers 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # net_devs=() 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # local -ga net_devs 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # e810=() 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # local -ga e810 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # x722=() 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # local -ga x722 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # mlx=() 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # local -ga mlx 00:40:05.712 
12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:05.712 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:05.713 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:05.713 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:05.713 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:05.713 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # is_hw=yes 
00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@247 -- # create_target_ns 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@27 -- # local -gA dev_map 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@28 -- # local -g _dev 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # ips=() 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 
ip=167772161 in_ns= 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772161 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:40:05.713 10.0.0.1 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772162 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_1' 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:40:05.713 10.0.0.2 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:40:05.713 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@38 -- # ping_ips 1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:05.714 12:23:29 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:40:05.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:05.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.629 ms 00:40:05.714 00:40:05.714 --- 10.0.0.1 ping statistics --- 00:40:05.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:05.714 rtt min/avg/max/mdev = 0.629/0.629/0.629/0.000 ms 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:40:05.714 12:23:29 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:40:05.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:05.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:40:05.714 00:40:05.714 --- 10.0.0.2 ping statistics --- 00:40:05.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:05.714 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair++ )) 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@270 -- # return 0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local 
dev=initiator0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # return 1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev= 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@160 -- # return 0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target0 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:40:05.714 12:23:29 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # return 1 00:40:05.714 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev= 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@160 -- # return 0 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:40:05.715 ' 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # modprobe nvme-tcp 
00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # nvmfpid=1649740 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@329 -- # waitforlisten 1649740 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1649740 ']' 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:05.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:05.715 12:23:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:05.715 [2024-12-05 12:23:29.838985] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:05.715 [2024-12-05 12:23:29.840114] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:40:05.715 [2024-12-05 12:23:29.840163] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:05.715 [2024-12-05 12:23:29.939330] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:05.715 [2024-12-05 12:23:29.990159] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:05.715 [2024-12-05 12:23:29.990209] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:05.715 [2024-12-05 12:23:29.990217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:05.715 [2024-12-05 12:23:29.990224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:05.715 [2024-12-05 12:23:29.990230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:05.715 [2024-12-05 12:23:29.992083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:05.715 [2024-12-05 12:23:29.992086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:05.715 [2024-12-05 12:23:30.079677] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:05.715 [2024-12-05 12:23:30.080484] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:05.715 [2024-12-05 12:23:30.080664] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:05.715 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:05.715 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:40:05.715 12:23:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:40:05.715 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:05.715 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:05.715 12:23:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:05.715 12:23:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:05.715 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:05.715 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:05.715 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:05.715 5000+0 records in 00:40:05.715 5000+0 records out 00:40:05.715 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0184944 s, 554 MB/s 00:40:05.715 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:05.715 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:05.715 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:05.976 AIO0 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:05.976 12:23:30 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:05.976 [2024-12-05 12:23:30.773106] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:05.976 [2024-12-05 12:23:30.817591] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1649740 0 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1649740 0 idle 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1649740 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1649740 -w 256 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:05.976 12:23:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1649740 root 20 0 128.2g 43776 32256 R 0.0 0.0 0:00.32 reactor_0' 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1649740 root 20 0 128.2g 43776 32256 R 0.0 0.0 0:00.32 reactor_0 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:05.976 
12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1649740 1 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1649740 1 idle 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1649740 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1649740 -w 256 00:40:05.976 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1649744 root 20 0 128.2g 43776 32256 S 0.0 0.0 0:00.00 reactor_1' 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1649744 root 20 0 128.2g 
43776 32256 S 0.0 0.0 0:00.00 reactor_1 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1650104 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1649740 0 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1649740 0 busy 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1649740 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1649740 -w 256 00:40:06.237 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1649740 root 20 0 128.2g 44928 32256 R 46.7 0.0 0:00.40 reactor_0' 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1649740 root 20 0 128.2g 44928 32256 R 46.7 0.0 0:00.40 reactor_0 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=46.7 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=46 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:06.498 12:23:31 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1649740 1 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1649740 1 busy 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1649740 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1649740 -w 256 00:40:06.498 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:06.760 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1649744 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.22 reactor_1' 00:40:06.760 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1649744 root 20 0 128.2g 44928 32256 R 99.9 0.0 0:00.22 reactor_1 00:40:06.760 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:06.760 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:06.760 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:06.760 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:40:06.760 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:06.760 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:06.760 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:06.760 12:23:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:06.760 12:23:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1650104 00:40:16.761 Initializing NVMe Controllers 00:40:16.761 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:16.761 Controller IO queue size 256, less than required. 00:40:16.761 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:16.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:16.761 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:16.761 Initialization complete. Launching workers. 
00:40:16.761 ======================================================== 00:40:16.761 Latency(us) 00:40:16.761 Device Information : IOPS MiB/s Average min max 00:40:16.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 19638.69 76.71 13040.08 3822.09 32096.01 00:40:16.761 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 20435.89 79.83 12528.01 8222.66 29773.96 00:40:16.761 ======================================================== 00:40:16.761 Total : 40074.59 156.54 12778.95 3822.09 32096.01 00:40:16.761 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1649740 0 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1649740 0 idle 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1649740 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1649740 -w 256 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1649740 root 20 0 128.2g 44928 32256 R 0.0 0.0 0:20.32 reactor_0' 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1649740 root 20 0 128.2g 44928 32256 R 0.0 0.0 0:20.32 reactor_0 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1649740 1 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1649740 1 idle 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1649740 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:16.761 12:23:41 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1649740 -w 256 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1649744 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1' 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1649744 root 20 0 128.2g 44928 32256 S 0.0 0.0 0:10.00 reactor_1 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:16.761 12:23:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:17.701 12:23:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:40:17.701 12:23:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:40:17.701 12:23:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:17.701 12:23:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:17.701 12:23:42 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1649740 0 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1649740 0 idle 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1649740 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1649740 -w 256 00:40:19.613 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:19.614 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1649740 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.69 reactor_0' 00:40:19.614 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1649740 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:20.69 reactor_0 00:40:19.614 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:19.614 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:19.614 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:19.614 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:19.614 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1649740 1 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1649740 1 idle 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1649740 00:40:19.874 
12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1649740 -w 256 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1649744 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1' 00:40:19.874 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1649744 root 20 0 128.2g 79488 32256 S 0.0 0.1 0:10.14 reactor_1 00:40:19.875 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:19.875 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:19.875 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:19.875 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:19.875 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:19.875 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:19.875 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:40:19.875 12:23:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:19.875 12:23:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:20.136 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@335 -- # nvmfcleanup 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@99 -- # sync 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@102 -- # set +e 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@103 -- # for i in {1..20} 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:40:20.136 rmmod nvme_tcp 00:40:20.136 rmmod nvme_fabrics 00:40:20.136 rmmod nvme_keyring 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:40:20.136 12:23:45 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@106 -- # set -e 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@107 -- # return 0 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # '[' -n 1649740 ']' 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@337 -- # killprocess 1649740 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1649740 ']' 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1649740 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:20.136 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1649740 00:40:20.395 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:20.395 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:20.395 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1649740' 00:40:20.395 killing process with pid 1649740 00:40:20.395 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1649740 00:40:20.395 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1649740 00:40:20.395 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:40:20.395 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # nvmf_fini 00:40:20.395 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@254 -- # local dev 00:40:20.395 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@257 -- # remove_target_ns 00:40:20.395 12:23:45 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:20.395 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> 
/dev/null' 00:40:20.395 12:23:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@258 -- # delete_main_bridge 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@121 -- # return 0 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@273 -- # reset_setup_interfaces 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # _dev=0 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # dev_map=() 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@274 -- # iptr 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # iptables-save 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # iptables-restore 00:40:22.936 00:40:22.936 real 0m25.512s 00:40:22.936 user 0m40.190s 00:40:22.936 sys 0m9.913s 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:22.936 12:23:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:22.936 ************************************ 00:40:22.936 END TEST nvmf_interrupt 00:40:22.936 ************************************ 00:40:22.936 00:40:22.936 real 30m5.626s 00:40:22.936 user 61m9.825s 00:40:22.936 sys 10m19.340s 00:40:22.936 12:23:47 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:22.936 12:23:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:22.936 ************************************ 00:40:22.936 END TEST nvmf_tcp 00:40:22.936 ************************************ 00:40:22.936 12:23:47 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:40:22.936 12:23:47 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:22.936 12:23:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:22.936 12:23:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:22.937 12:23:47 -- common/autotest_common.sh@10 -- # set +x 00:40:22.937 ************************************ 00:40:22.937 START TEST spdkcli_nvmf_tcp 00:40:22.937 ************************************ 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:22.937 * Looking for test storage... 00:40:22.937 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:22.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.937 --rc genhtml_branch_coverage=1 00:40:22.937 --rc genhtml_function_coverage=1 00:40:22.937 --rc genhtml_legend=1 00:40:22.937 --rc geninfo_all_blocks=1 00:40:22.937 --rc geninfo_unexecuted_blocks=1 00:40:22.937 00:40:22.937 ' 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:22.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.937 --rc genhtml_branch_coverage=1 00:40:22.937 --rc genhtml_function_coverage=1 00:40:22.937 --rc genhtml_legend=1 00:40:22.937 --rc geninfo_all_blocks=1 00:40:22.937 --rc 
geninfo_unexecuted_blocks=1 00:40:22.937 00:40:22.937 ' 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:22.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.937 --rc genhtml_branch_coverage=1 00:40:22.937 --rc genhtml_function_coverage=1 00:40:22.937 --rc genhtml_legend=1 00:40:22.937 --rc geninfo_all_blocks=1 00:40:22.937 --rc geninfo_unexecuted_blocks=1 00:40:22.937 00:40:22.937 ' 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:22.937 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:22.937 --rc genhtml_branch_coverage=1 00:40:22.937 --rc genhtml_function_coverage=1 00:40:22.937 --rc genhtml_legend=1 00:40:22.937 --rc geninfo_all_blocks=1 00:40:22.937 --rc geninfo_unexecuted_blocks=1 00:40:22.937 00:40:22.937 ' 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:22.937 
12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@50 -- # : 0 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:40:22.937 
12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:40:22.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- nvmf/common.sh@54 -- # have_pci_nics=0 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1653283 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1653283 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1653283 ']' 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:22.937 12:23:47 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:40:22.938 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:22.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:22.938 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:22.938 12:23:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:22.938 [2024-12-05 12:23:47.866313] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:40:22.938 [2024-12-05 12:23:47.866371] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1653283 ] 00:40:22.938 [2024-12-05 12:23:47.955445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:23.198 [2024-12-05 12:23:48.002010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:23.198 [2024-12-05 12:23:48.002013] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:23.770 12:23:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:23.770 12:23:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:40:23.770 12:23:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:23.770 12:23:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:23.770 12:23:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:23.770 12:23:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:23.770 12:23:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:40:23.770 12:23:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:23.770 12:23:48 spdkcli_nvmf_tcp 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:40:23.770 12:23:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:23.770 12:23:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:23.770 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:23.770 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:23.770 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:23.770 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:23.770 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:23.770 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:23.770 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:23.770 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:23.770 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:23.770 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:23.770 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:23.770 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:23.770 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:23.770 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:23.771 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:23.771 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:23.771 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:23.771 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:23.771 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:23.771 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:23.771 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:23.771 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:23.771 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:23.771 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:23.771 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:23.771 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:23.771 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:23.771 ' 00:40:27.071 [2024-12-05 12:23:51.489487] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:28.011 [2024-12-05 12:23:52.845647] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:40:30.551 [2024-12-05 12:23:55.372666] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:40:33.105 [2024-12-05 12:23:57.574943] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:40:34.489 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:40:34.489 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:40:34.489 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:40:34.489 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:40:34.489 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:40:34.489 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:40:34.489 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:40:34.489 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:34.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:40:34.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:40:34.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:34.489 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:34.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:40:34.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:34.489 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:40:34.489 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:40:34.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:40:34.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:34.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:40:34.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:34.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:40:34.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:40:34.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:40:34.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:40:34.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:40:34.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:40:34.490 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:40:34.490 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:40:34.490 12:23:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:40:34.490 12:23:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:34.490 
12:23:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:34.490 12:23:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:40:34.490 12:23:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:34.490 12:23:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:34.490 12:23:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:40:34.490 12:23:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:40:34.750 12:23:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:40:35.011 12:23:59 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:40:35.012 12:23:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:40:35.012 12:23:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:35.012 12:23:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:35.012 12:23:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:40:35.012 12:23:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:35.012 12:23:59 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:35.012 12:23:59 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:40:35.012 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:40:35.012 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:35.012 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:40:35.012 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:40:35.012 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:40:35.012 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:40:35.012 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:40:35.012 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:40:35.012 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:40:35.012 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:40:35.012 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:40:35.012 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:40:35.012 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:40:35.012 ' 00:40:41.597 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:40:41.597 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:40:41.597 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:41.597 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:40:41.597 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:40:41.597 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:40:41.597 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:40:41.597 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:40:41.597 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:40:41.597 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:40:41.597 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:40:41.597 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:40:41.597 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:40:41.597 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1653283 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1653283 ']' 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1653283 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1653283 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1653283' 00:40:41.597 killing process with pid 1653283 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1653283 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1653283 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 
-- # cleanup 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1653283 ']' 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1653283 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1653283 ']' 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1653283 00:40:41.597 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1653283) - No such process 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1653283 is not found' 00:40:41.597 Process with pid 1653283 is not found 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:40:41.597 00:40:41.597 real 0m18.182s 00:40:41.597 user 0m40.408s 00:40:41.597 sys 0m0.851s 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:41.597 12:24:05 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:41.597 ************************************ 00:40:41.597 END TEST spdkcli_nvmf_tcp 00:40:41.597 ************************************ 00:40:41.597 12:24:05 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:41.597 12:24:05 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:41.597 12:24:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:41.597 12:24:05 -- 
common/autotest_common.sh@10 -- # set +x 00:40:41.597 ************************************ 00:40:41.597 START TEST nvmf_identify_passthru 00:40:41.597 ************************************ 00:40:41.597 12:24:05 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:40:41.597 * Looking for test storage... 00:40:41.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:41.597 12:24:05 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:41.597 12:24:05 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:40:41.597 12:24:05 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:41.597 12:24:06 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:40:41.597 
12:24:06 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:41.597 12:24:06 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:40:41.597 12:24:06 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:41.597 12:24:06 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:41.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.597 --rc genhtml_branch_coverage=1 00:40:41.597 --rc genhtml_function_coverage=1 00:40:41.597 --rc genhtml_legend=1 00:40:41.597 --rc geninfo_all_blocks=1 00:40:41.597 --rc geninfo_unexecuted_blocks=1 00:40:41.597 00:40:41.597 ' 
00:40:41.597 12:24:06 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:41.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.597 --rc genhtml_branch_coverage=1 00:40:41.597 --rc genhtml_function_coverage=1 00:40:41.597 --rc genhtml_legend=1 00:40:41.597 --rc geninfo_all_blocks=1 00:40:41.597 --rc geninfo_unexecuted_blocks=1 00:40:41.597 00:40:41.597 ' 00:40:41.597 12:24:06 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:41.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.597 --rc genhtml_branch_coverage=1 00:40:41.597 --rc genhtml_function_coverage=1 00:40:41.597 --rc genhtml_legend=1 00:40:41.597 --rc geninfo_all_blocks=1 00:40:41.597 --rc geninfo_unexecuted_blocks=1 00:40:41.597 00:40:41.597 ' 00:40:41.597 12:24:06 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:41.597 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:41.597 --rc genhtml_branch_coverage=1 00:40:41.597 --rc genhtml_function_coverage=1 00:40:41.597 --rc genhtml_legend=1 00:40:41.597 --rc geninfo_all_blocks=1 00:40:41.597 --rc geninfo_unexecuted_blocks=1 00:40:41.597 00:40:41.597 ' 00:40:41.598 12:24:06 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@13 -- # 
NVMF_TRANSPORT_OPTS= 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:41.598 12:24:06 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:41.598 12:24:06 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:41.598 12:24:06 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:41.598 12:24:06 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:41.598 12:24:06 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.598 12:24:06 nvmf_identify_passthru -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.598 12:24:06 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.598 12:24:06 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:41.598 12:24:06 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@50 -- # : 0 00:40:41.598 12:24:06 
nvmf_identify_passthru -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:40:41.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@54 -- # have_pci_nics=0 00:40:41.598 12:24:06 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:41.598 12:24:06 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:40:41.598 12:24:06 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:41.598 12:24:06 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:41.598 12:24:06 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:41.598 12:24:06 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.598 12:24:06 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.598 12:24:06 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.598 12:24:06 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:40:41.598 12:24:06 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:41.598 12:24:06 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@296 -- # prepare_net_devs 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@258 -- # local -g is_hw=no 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@260 -- # remove_target_ns 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:41.598 12:24:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:40:41.598 12:24:06 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:40:41.598 12:24:06 nvmf_identify_passthru -- nvmf/common.sh@125 -- # xtrace_disable 00:40:41.598 12:24:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@131 -- # pci_devs=() 00:40:49.740 12:24:13 nvmf_identify_passthru -- 
nvmf/common.sh@131 -- # local -a pci_devs 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@132 -- # pci_net_devs=() 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@133 -- # pci_drivers=() 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@133 -- # local -A pci_drivers 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@135 -- # net_devs=() 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@135 -- # local -ga net_devs 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@136 -- # e810=() 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@136 -- # local -ga e810 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@137 -- # x722=() 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@137 -- # local -ga x722 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@138 -- # mlx=() 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@138 -- # local -ga mlx 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 
00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:40:49.740 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:40:49.740 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:40:49.741 Found 
0000:4b:00.1 (0x8086 - 0x159b) 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:40:49.741 Found net devices under 0000:4b:00.0: cvl_0_0 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:49.741 
12:24:13 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:40:49.741 Found net devices under 0000:4b:00.1: cvl_0_1 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@262 -- # is_hw=yes 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@247 -- # create_target_ns 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@139 -- # set_up lo 
NVMF_TARGET_NS_CMD 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@27 -- # local -gA dev_map 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@28 -- # local -g _dev 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # ips=() 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:40:49.741 12:24:13 
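The `set_up lo NVMF_TARGET_NS_CMD` call above shows the dispatch pattern used throughout nvmf/setup.sh: helpers take an optional *name* of a command array, resolve it with a bash nameref (`local -n`), and prefix the command so it runs inside the target namespace. A minimal standalone sketch of that pattern (`in_ns_run` and `DEMO_NS_CMD` are hypothetical names, and `env DEMO=1` stands in for `ip netns exec nvmf_ns_spdk` so the sketch runs without root):

```shell
# Run a command either directly or through a wrapper whose array *name*
# is passed in, mirroring the in_ns/nameref handling seen in the trace.
in_ns_run() {
  local in_ns=$1; shift
  if [ -n "$in_ns" ]; then
    local -n ns=$in_ns     # nameref: resolve the wrapper array by name
    "${ns[@]}" "$@"        # e.g. ip netns exec <ns> <cmd> in the real script
  else
    "$@"                   # no namespace requested: run as-is
  fi
}

# Stand-in for NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk):
DEMO_NS_CMD=(env DEMO=1)
in_ns_run DEMO_NS_CMD sh -c 'echo "wrapped DEMO=$DEMO"'
# prints wrapped DEMO=1
in_ns_run "" echo direct
# prints direct
```

Passing the array by name (rather than expanding it at the call site) is what lets the same `set_up`/`set_ip` helpers serve both the host-side initiator device and the namespaced target device.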
nvmf_identify_passthru -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772161 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 
00:40:49.741 10.0.0.1 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:40:49.741 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772162 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:40:49.742 10.0.0.2 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:40:49.742 12:24:13 
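The `val_to_ip` steps above turn the packed pool values (167772161, i.e. 0x0A000001, and 167772162) into dotted-quad form before `ip addr add` assigns them. A sketch of that byte extraction, consistent with the `printf '%u.%u.%u.%u'` call visible in the trace (the shift arithmetic is an assumption about how the octets are derived):

```shell
# Convert a 32-bit integer into a dotted-quad IPv4 address by peeling
# off one octet per byte, most significant first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24 & 255)) $((val >> 16 & 255)) $((val >> 8 & 255)) $((val & 255))
}

val_to_ip 167772161   # prints 10.0.0.1  (initiator side of pair 0)
val_to_ip 167772162   # prints 10.0.0.2  (target side of pair 0)
```

Keeping the pool as an integer is what makes the `(( ip_pool += 2 ))` bookkeeping in `setup_interfaces` trivial: each initiator/target pair simply consumes two consecutive addresses.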
nvmf_identify_passthru -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@38 -- # ping_ips 1 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair < pairs )) 
00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:40:49.742 12:24:13 nvmf_identify_passthru -- 
nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:40:49.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:49.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.594 ms 00:40:49.742 00:40:49.742 --- 10.0.0.1 ping statistics --- 00:40:49.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:49.742 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target0 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:40:49.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:49.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:40:49.742 00:40:49.742 --- 10.0.0.2 ping statistics --- 00:40:49.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:49.742 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair++ )) 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@270 -- # return 0 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:40:49.742 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev 
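The two `ping -c 1` probes above verify connectivity in both directions and are judged only by ping's exit status. If one wanted to pull a number out of the summary line instead, the slash/space-delimited `rtt min/avg/max/mdev` field layout makes that a one-liner; this parsing step is illustrative only, not something the script itself performs:

```shell
# Extract the avg rtt from a canned ping summary line. Splitting on both
# '/' and ' ' yields: rtt min avg max mdev = <min> <avg> <max> <mdev> ms,
# so the avg value is the fourth field from the end.
summary='rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms'
echo "$summary" | awk -F'[/ ]' '{print $(NF-3)}'
# prints 0.594
```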
initiator0 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator0 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator1 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # return 1 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev= 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@160 
-- # return 0 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target0 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target0 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@332 -- # 
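The `get_ip_address` calls above recover each interface's IP by reading it back from `/sys/class/net/<dev>/ifalias`, which `set_ip` populated earlier with `tee`. The write-then-read round trip can be simulated without a real device; here a temp file stands in for the sysfs ifalias node, and `read_ifalias` is a hypothetical helper mirroring the `cat` + non-empty check seen in the trace:

```shell
# Mirror get_ip_address: read the recorded IP back and echo it only when
# the alias file is non-empty (an absent device yields no output).
read_ifalias() {
  local ip
  ip=$(cat "$1" 2>/dev/null)
  [ -n "$ip" ] && echo "$ip"
}

alias_file=$(mktemp)
echo 10.0.0.1 | tee "$alias_file" >/dev/null   # what set_ip does via tee
read_ifalias "$alias_file"
# prints 10.0.0.1
rm -f "$alias_file"
```

Stashing the address in ifalias means later stages (like the `nvmf_legacy_env` export of `NVMF_FIRST_INITIATOR_IP`) can recover it from the device alone, without re-deriving it from the pool.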
get_tcp_target_ip_address target1 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target1 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target1 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # return 1 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev= 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@160 -- # return 0 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:40:49.743 ' 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:49.743 12:24:13 nvmf_identify_passthru -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:40:49.743 12:24:13 nvmf_identify_passthru -- 
nvmf/common.sh@321 -- # modprobe nvme-tcp 00:40:49.743 12:24:13 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:40:49.743 12:24:13 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:49.743 12:24:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:49.743 12:24:13 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:40:49.743 12:24:13 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:40:49.743 12:24:13 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:40:49.743 12:24:13 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:40:49.743 12:24:13 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:40:49.743 12:24:13 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:40:49.743 12:24:13 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:40:49.743 12:24:13 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:40:49.743 12:24:13 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:40:49.743 12:24:13 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:40:49.743 12:24:13 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:40:49.743 12:24:13 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:65:00.0 00:40:49.743 12:24:13 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:65:00.0 00:40:49.743 12:24:13 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:40:49.743 12:24:13 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:40:49.743 12:24:13 
nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:40:49.743 12:24:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:40:49.743 12:24:13 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:40:49.743 12:24:14 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:40:49.743 12:24:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:40:49.743 12:24:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:40:49.743 12:24:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:40:50.004 12:24:14 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:40:50.004 12:24:14 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:40:50.004 12:24:14 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:50.004 12:24:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:50.004 12:24:14 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:40:50.004 12:24:14 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:50.004 12:24:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:50.004 12:24:14 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1661285 00:40:50.004 12:24:14 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:40:50.004 12:24:14 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec 
nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:40:50.004 12:24:14 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1661285 00:40:50.004 12:24:14 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1661285 ']' 00:40:50.004 12:24:14 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:50.004 12:24:14 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:50.004 12:24:14 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:50.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:50.005 12:24:14 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:50.005 12:24:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:50.005 [2024-12-05 12:24:14.995886] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:40:50.005 [2024-12-05 12:24:14.995954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:50.270 [2024-12-05 12:24:15.093598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:50.270 [2024-12-05 12:24:15.147301] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:50.270 [2024-12-05 12:24:15.147371] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:40:50.270 [2024-12-05 12:24:15.147380] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:50.270 [2024-12-05 12:24:15.147388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:50.270 [2024-12-05 12:24:15.147394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:50.270 [2024-12-05 12:24:15.149528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:50.270 [2024-12-05 12:24:15.149688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:50.270 [2024-12-05 12:24:15.149851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:50.270 [2024-12-05 12:24:15.149852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:50.839 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:50.839 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:40:50.839 12:24:15 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:40:50.839 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.839 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:50.839 INFO: Log level set to 20 00:40:50.840 INFO: Requests: 00:40:50.840 { 00:40:50.840 "jsonrpc": "2.0", 00:40:50.840 "method": "nvmf_set_config", 00:40:50.840 "id": 1, 00:40:50.840 "params": { 00:40:50.840 "admin_cmd_passthru": { 00:40:50.840 "identify_ctrlr": true 00:40:50.840 } 00:40:50.840 } 00:40:50.840 } 00:40:50.840 00:40:50.840 INFO: response: 00:40:50.840 { 00:40:50.840 "jsonrpc": "2.0", 00:40:50.840 "id": 1, 00:40:50.840 "result": true 00:40:50.840 } 00:40:50.840 00:40:50.840 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:50.840 12:24:15 nvmf_identify_passthru -- target/identify_passthru.sh@37 
-- # rpc_cmd -v framework_start_init 00:40:50.840 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:50.840 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:50.840 INFO: Setting log level to 20 00:40:50.840 INFO: Setting log level to 20 00:40:50.840 INFO: Log level set to 20 00:40:50.840 INFO: Log level set to 20 00:40:50.840 INFO: Requests: 00:40:50.840 { 00:40:50.840 "jsonrpc": "2.0", 00:40:50.840 "method": "framework_start_init", 00:40:50.840 "id": 1 00:40:50.840 } 00:40:50.840 00:40:50.840 INFO: Requests: 00:40:50.840 { 00:40:50.840 "jsonrpc": "2.0", 00:40:50.840 "method": "framework_start_init", 00:40:50.840 "id": 1 00:40:50.840 } 00:40:50.840 00:40:51.100 [2024-12-05 12:24:15.917661] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:40:51.100 INFO: response: 00:40:51.100 { 00:40:51.100 "jsonrpc": "2.0", 00:40:51.100 "id": 1, 00:40:51.100 "result": true 00:40:51.100 } 00:40:51.100 00:40:51.100 INFO: response: 00:40:51.100 { 00:40:51.100 "jsonrpc": "2.0", 00:40:51.100 "id": 1, 00:40:51.100 "result": true 00:40:51.100 } 00:40:51.100 00:40:51.100 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.100 12:24:15 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:51.100 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.100 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:51.100 INFO: Setting log level to 40 00:40:51.100 INFO: Setting log level to 40 00:40:51.100 INFO: Setting log level to 40 00:40:51.100 [2024-12-05 12:24:15.931239] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:51.100 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.100 12:24:15 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # 
timing_exit start_nvmf_tgt 00:40:51.100 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:51.100 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:51.100 12:24:15 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:40:51.100 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.100 12:24:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:51.360 Nvme0n1 00:40:51.360 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.360 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:40:51.360 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.360 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:51.360 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.360 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:40:51.360 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.360 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:51.360 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.360 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:51.360 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.360 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:51.360 [2024-12-05 12:24:16.330624] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: 
*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:51.360 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.360 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:40:51.360 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.360 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:51.360 [ 00:40:51.360 { 00:40:51.360 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:40:51.360 "subtype": "Discovery", 00:40:51.360 "listen_addresses": [], 00:40:51.360 "allow_any_host": true, 00:40:51.360 "hosts": [] 00:40:51.360 }, 00:40:51.360 { 00:40:51.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:40:51.360 "subtype": "NVMe", 00:40:51.360 "listen_addresses": [ 00:40:51.360 { 00:40:51.360 "trtype": "TCP", 00:40:51.360 "adrfam": "IPv4", 00:40:51.360 "traddr": "10.0.0.2", 00:40:51.360 "trsvcid": "4420" 00:40:51.360 } 00:40:51.361 ], 00:40:51.361 "allow_any_host": true, 00:40:51.361 "hosts": [], 00:40:51.361 "serial_number": "SPDK00000000000001", 00:40:51.361 "model_number": "SPDK bdev Controller", 00:40:51.361 "max_namespaces": 1, 00:40:51.361 "min_cntlid": 1, 00:40:51.361 "max_cntlid": 65519, 00:40:51.361 "namespaces": [ 00:40:51.361 { 00:40:51.361 "nsid": 1, 00:40:51.361 "bdev_name": "Nvme0n1", 00:40:51.361 "name": "Nvme0n1", 00:40:51.361 "nguid": "36344730526054870025384500000044", 00:40:51.361 "uuid": "36344730-5260-5487-0025-384500000044" 00:40:51.361 } 00:40:51.361 ] 00:40:51.361 } 00:40:51.361 ] 00:40:51.361 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.361 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:51.361 12:24:16 nvmf_identify_passthru -- 
target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:40:51.361 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:40:51.621 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:40:51.621 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:51.621 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:40:51.621 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:40:51.881 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:40:51.881 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:40:51.881 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:40:51.881 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:51.881 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:51.881 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:51.881 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:51.881 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:40:51.881 12:24:16 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:40:51.881 12:24:16 nvmf_identify_passthru -- nvmf/common.sh@335 -- # nvmfcleanup 00:40:51.881 12:24:16 nvmf_identify_passthru -- nvmf/common.sh@99 -- # sync 00:40:51.881 12:24:16 nvmf_identify_passthru -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:40:51.881 12:24:16 
nvmf_identify_passthru -- nvmf/common.sh@102 -- # set +e 00:40:51.881 12:24:16 nvmf_identify_passthru -- nvmf/common.sh@103 -- # for i in {1..20} 00:40:51.881 12:24:16 nvmf_identify_passthru -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:40:51.881 rmmod nvme_tcp 00:40:51.881 rmmod nvme_fabrics 00:40:51.881 rmmod nvme_keyring 00:40:51.881 12:24:16 nvmf_identify_passthru -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:40:51.881 12:24:16 nvmf_identify_passthru -- nvmf/common.sh@106 -- # set -e 00:40:51.881 12:24:16 nvmf_identify_passthru -- nvmf/common.sh@107 -- # return 0 00:40:51.881 12:24:16 nvmf_identify_passthru -- nvmf/common.sh@336 -- # '[' -n 1661285 ']' 00:40:51.881 12:24:16 nvmf_identify_passthru -- nvmf/common.sh@337 -- # killprocess 1661285 00:40:51.881 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1661285 ']' 00:40:51.881 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1661285 00:40:51.881 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:40:51.881 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:51.881 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1661285 00:40:51.881 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:51.881 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:51.881 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1661285' 00:40:51.881 killing process with pid 1661285 00:40:51.881 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1661285 00:40:51.881 12:24:16 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1661285 00:40:52.142 12:24:17 nvmf_identify_passthru -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:40:52.142 12:24:17 nvmf_identify_passthru 
-- nvmf/common.sh@342 -- # nvmf_fini 00:40:52.142 12:24:17 nvmf_identify_passthru -- nvmf/setup.sh@254 -- # local dev 00:40:52.142 12:24:17 nvmf_identify_passthru -- nvmf/setup.sh@257 -- # remove_target_ns 00:40:52.142 12:24:17 nvmf_identify_passthru -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:52.142 12:24:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:40:52.142 12:24:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@258 -- # delete_main_bridge 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@121 -- # return 0 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:40:54.690 12:24:19 
nvmf_identify_passthru -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # _dev=0 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # dev_map=() 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/setup.sh@274 -- # iptr 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/common.sh@548 -- # iptables-save 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:40:54.690 12:24:19 nvmf_identify_passthru -- nvmf/common.sh@548 -- # iptables-restore 00:40:54.690 00:40:54.690 real 0m13.396s 00:40:54.690 user 0m10.293s 00:40:54.690 sys 0m6.866s 00:40:54.690 12:24:19 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:54.690 12:24:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:40:54.690 ************************************ 00:40:54.690 END TEST nvmf_identify_passthru 00:40:54.690 ************************************ 00:40:54.690 12:24:19 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:54.690 12:24:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:40:54.690 12:24:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:54.690 12:24:19 -- common/autotest_common.sh@10 -- # set +x 00:40:54.690 ************************************ 00:40:54.690 START TEST nvmf_dif 00:40:54.690 ************************************ 00:40:54.690 12:24:19 nvmf_dif -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:40:54.690 * Looking for test storage... 00:40:54.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:54.690 12:24:19 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:54.690 12:24:19 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:40:54.690 12:24:19 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:54.690 12:24:19 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:54.690 12:24:19 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:40:54.690 12:24:19 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:54.690 12:24:19 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:54.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.690 --rc genhtml_branch_coverage=1 00:40:54.690 --rc genhtml_function_coverage=1 00:40:54.690 --rc genhtml_legend=1 00:40:54.690 --rc geninfo_all_blocks=1 00:40:54.690 --rc geninfo_unexecuted_blocks=1 00:40:54.690 00:40:54.690 ' 00:40:54.690 12:24:19 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:54.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.690 --rc genhtml_branch_coverage=1 00:40:54.690 --rc genhtml_function_coverage=1 00:40:54.690 --rc genhtml_legend=1 00:40:54.690 --rc geninfo_all_blocks=1 00:40:54.690 --rc geninfo_unexecuted_blocks=1 00:40:54.690 00:40:54.690 ' 00:40:54.690 12:24:19 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:40:54.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.690 --rc genhtml_branch_coverage=1 00:40:54.690 --rc genhtml_function_coverage=1 00:40:54.690 --rc genhtml_legend=1 00:40:54.690 --rc geninfo_all_blocks=1 00:40:54.690 --rc geninfo_unexecuted_blocks=1 00:40:54.690 00:40:54.690 ' 00:40:54.690 12:24:19 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:54.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.690 --rc genhtml_branch_coverage=1 00:40:54.690 --rc genhtml_function_coverage=1 00:40:54.690 --rc genhtml_legend=1 00:40:54.690 --rc geninfo_all_blocks=1 00:40:54.690 --rc geninfo_unexecuted_blocks=1 00:40:54.690 00:40:54.690 ' 00:40:54.690 12:24:19 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:54.690 12:24:19 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:40:54.690 12:24:19 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:54.690 12:24:19 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:54.690 12:24:19 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:54.690 12:24:19 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:54.690 12:24:19 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:54.690 12:24:19 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:40:54.690 12:24:19 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:54.690 12:24:19 nvmf_dif -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:40:54.690 12:24:19 nvmf_dif -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:54.691 12:24:19 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:40:54.691 12:24:19 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:54.691 12:24:19 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:54.691 12:24:19 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:54.691 12:24:19 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.691 12:24:19 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.691 12:24:19 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.691 12:24:19 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:40:54.691 12:24:19 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:40:54.691 12:24:19 nvmf_dif -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:40:54.691 12:24:19 nvmf_dif -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:40:54.691 12:24:19 nvmf_dif -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@50 -- # : 0 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:40:54.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@54 -- # have_pci_nics=0 00:40:54.691 12:24:19 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:40:54.691 12:24:19 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:40:54.691 12:24:19 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:40:54.691 12:24:19 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:40:54.691 12:24:19 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@296 -- # prepare_net_devs 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@258 -- # local -g is_hw=no 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@260 -- # remove_target_ns 00:40:54.691 12:24:19 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:40:54.691 12:24:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:40:54.691 12:24:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:40:54.691 12:24:19 nvmf_dif -- nvmf/common.sh@125 -- # xtrace_disable 00:40:54.691 12:24:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@131 -- # pci_devs=() 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@131 -- # local -a pci_devs 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@132 -- # pci_net_devs=() 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:41:02.885 
12:24:26 nvmf_dif -- nvmf/common.sh@133 -- # pci_drivers=() 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@133 -- # local -A pci_drivers 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@135 -- # net_devs=() 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@135 -- # local -ga net_devs 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@136 -- # e810=() 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@136 -- # local -ga e810 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@137 -- # x722=() 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@137 -- # local -ga x722 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@138 -- # mlx=() 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@138 -- # local -ga mlx 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@162 -- # 
pci_devs+=("${e810[@]}") 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:41:02.885 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:41:02.885 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@226 -- # for pci in 
"${pci_devs[@]}" 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:41:02.885 Found net devices under 0000:4b:00.0: cvl_0_0 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:41:02.885 Found net devices under 0000:4b:00.1: cvl_0_1 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@262 -- # is_hw=yes 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:41:02.885 12:24:26 nvmf_dif -- 
nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:41:02.885 12:24:26 nvmf_dif -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:41:02.885 12:24:26 nvmf_dif -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:41:02.885 12:24:26 nvmf_dif -- nvmf/setup.sh@247 -- # create_target_ns 00:41:02.885 12:24:26 nvmf_dif -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:41:02.885 12:24:26 nvmf_dif -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:41:02.885 12:24:26 nvmf_dif -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:41:02.885 12:24:26 nvmf_dif -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:02.885 12:24:26 nvmf_dif -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:41:02.885 12:24:26 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:41:02.885 12:24:26 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@27 -- # local -gA dev_map 00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@28 -- # local -g _dev 00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:41:02.886 
12:24:26 nvmf_dif -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip)))
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@52 -- # [[ phy == phy ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@55 -- # initiator=cvl_0_0
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@55 -- # target=cvl_0_1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@58 -- # [[ phy == veth ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@59 -- # [[ phy == veth ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns=
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772161
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772161
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0'
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias'
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias
00:41:02.886 10.0.0.1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772162
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772162
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.2
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1'
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias'
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.2
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
00:41:02.886 10.0.0.2
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@66 -- # set_up cvl_0_0
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns=
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up'
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up'
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@69 -- # [[ phy == veth ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@70 -- # [[ phy == veth ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
00:41:02.886 12:24:26 nvmf_dif -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@38 -- # ping_ips 1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@87 -- # local pairs=1 pair
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair = 0 ))
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:41:02.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:41:02.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.616 ms
00:41:02.886
00:41:02.886 --- 10.0.0.1 ping statistics ---
00:41:02.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:41:02.886 rtt min/avg/max/mdev = 0.616/0.616/0.616/0.000 ms
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2'
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2
00:41:02.886 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:41:02.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms
00:41:02.886
00:41:02.886 --- 10.0.0.2 ping statistics ---
00:41:02.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:41:02.886 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair++ ))
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:41:02.886 12:24:26 nvmf_dif -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:41:02.886 12:24:26 nvmf_dif -- nvmf/common.sh@270 -- # return 0
00:41:02.886 12:24:26 nvmf_dif -- nvmf/common.sh@298 -- # '[' iso == iso ']'
00:41:02.886 12:24:26 nvmf_dif -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:41:05.431 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:41:05.431 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:41:05.431 12:24:30 nvmf_dif -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator1
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@100 -- # return 1
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@159 -- # dev=
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@160 -- # return 0
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:41:05.431 12:24:30 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target1
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target1
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@100 -- # return 1
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@159 -- # dev=
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@160 -- # return 0
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP=
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]]
00:41:05.692 12:24:30 nvmf_dif -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2
00:41:05.692 '
00:41:05.692 12:24:30 nvmf_dif -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:41:05.692 12:24:30 nvmf_dif -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:41:05.692 12:24:30 nvmf_dif -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:41:05.692 12:24:30 nvmf_dif -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:41:05.692 12:24:30 nvmf_dif -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:41:05.692 12:24:30 nvmf_dif -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:41:05.692 12:24:30 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'
00:41:05.692 12:24:30 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart
00:41:05.692 12:24:30 nvmf_dif -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:41:05.692 12:24:30 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable
00:41:05.692 12:24:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:41:05.692 12:24:30 nvmf_dif -- nvmf/common.sh@328 -- # nvmfpid=1667369
00:41:05.692 12:24:30 nvmf_dif -- nvmf/common.sh@329 -- # waitforlisten 1667369
00:41:05.692 12:24:30 nvmf_dif -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:41:05.692 12:24:30 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1667369 ']'
00:41:05.692 12:24:30 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:41:05.692 12:24:30 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100
00:41:05.692 12:24:30 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:41:05.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:41:05.692 12:24:30 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable
00:41:05.692 12:24:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:41:05.692 [2024-12-05 12:24:30.604530] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization...
00:41:05.692 [2024-12-05 12:24:30.604591] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:41:05.692 [2024-12-05 12:24:30.704320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:41:05.953 [2024-12-05 12:24:30.756235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:41:05.953 [2024-12-05 12:24:30.756287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:41:05.953 [2024-12-05 12:24:30.756296] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:41:05.953 [2024-12-05 12:24:30.756303] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:41:05.953 [2024-12-05 12:24:30.756309] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:41:05.953 [2024-12-05 12:24:30.757055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:41:06.525 12:24:31 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:41:06.525 12:24:31 nvmf_dif -- common/autotest_common.sh@868 -- # return 0
00:41:06.525 12:24:31 nvmf_dif -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:41:06.525 12:24:31 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable
00:41:06.525 12:24:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:41:06.525 12:24:31 nvmf_dif -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:41:06.525 12:24:31 nvmf_dif -- target/dif.sh@139 -- # create_transport
00:41:06.525 12:24:31 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip
00:41:06.525 12:24:31 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:06.525 12:24:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:41:06.525 [2024-12-05 12:24:31.444433] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:41:06.525 12:24:31 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:06.525 12:24:31 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1
00:41:06.525 12:24:31 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:41:06.525 12:24:31 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:06.525 12:24:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:41:06.525 ************************************
00:41:06.525 START TEST fio_dif_1_default
00:41:06.525 ************************************
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@"
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:41:06.525 bdev_null0
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:41:06.525 [2024-12-05 12:24:31.528804] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # config=()
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # local subsystem config
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:41:06.525 {
00:41:06.525 "params": {
00:41:06.525 "name": "Nvme$subsystem",
00:41:06.525 "trtype": "$TEST_TRANSPORT",
00:41:06.525 "traddr": "$NVMF_FIRST_TARGET_IP",
00:41:06.525 "adrfam": "ipv4",
00:41:06.525 "trsvcid": "$NVMF_PORT",
00:41:06.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:41:06.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:41:06.525 "hdgst": ${hdgst:-false},
00:41:06.525 "ddgst": ${ddgst:-false}
00:41:06.525 },
00:41:06.525 "method": "bdev_nvme_attach_controller"
00:41:06.525 }
00:41:06.525 EOF
00:41:06.525 )")
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib=
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # cat
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 ))
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files ))
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@396 -- # jq .
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@397 -- # IFS=,
00:41:06.525 12:24:31 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:41:06.525 "params": {
00:41:06.525 "name": "Nvme0",
00:41:06.525 "trtype": "tcp",
00:41:06.525 "traddr": "10.0.0.2",
00:41:06.525 "adrfam": "ipv4",
00:41:06.525 "trsvcid": "4420",
00:41:06.525 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:41:06.525 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:41:06.525 "hdgst": false,
00:41:06.525 "ddgst": false
00:41:06.525 },
00:41:06.525 "method": "bdev_nvme_attach_controller"
00:41:06.525 }'
00:41:06.813 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:41:06.813 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:41:06.813 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:41:06.813 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:41:06.813 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:41:06.813 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:41:06.813 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=
00:41:06.813 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:41:06.813 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:41:06.813 12:24:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:41:07.081 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4
00:41:07.081 fio-3.35
00:41:07.081 Starting 1 thread
00:41:19.307
00:41:19.307 filename0: (groupid=0, jobs=1): err= 0: pid=1667978: Thu Dec 5 12:24:42 2024
00:41:19.307 read: IOPS=281, BW=1125KiB/s (1152kB/s)(11.0MiB/10040msec)
00:41:19.307 slat (nsec): min=5536, max=61958, avg=7085.61, stdev=1777.60
00:41:19.307 clat (usec): min=517, max=46025, avg=14200.56, stdev=18966.87
00:41:19.307 lat (usec): min=522, max=46060, avg=14207.64, stdev=18966.44
00:41:19.307 clat percentiles (usec):
00:41:19.307 | 1.00th=[ 619], 5.00th=[ 750], 10.00th=[ 766], 20.00th=[ 807],
00:41:19.307 | 30.00th=[ 898], 40.00th=[ 963], 50.00th=[ 988], 60.00th=[ 1012],
00:41:19.307 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:41:19.307 | 99.00th=[41681], 99.50th=[42206], 99.90th=[45876], 99.95th=[45876],
00:41:19.307 | 99.99th=[45876]
00:41:19.307 bw ( KiB/s): min= 704, max= 4384, per=100.00%, avg=1128.00, stdev=1097.71, samples=20
00:41:19.307 iops : min= 176, max= 1096, avg=282.00, stdev=274.43, samples=20
00:41:19.307 lat (usec) : 750=5.77%, 1000=49.68%
00:41:19.307 lat (msec) : 2=11.54%, 50=33.00%
00:41:19.307 cpu : usr=93.23%, sys=6.52%, ctx=14, majf=0, minf=253
00:41:19.307 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:41:19.307 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:19.307 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:19.307 issued rwts: total=2824,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:19.307 latency : target=0, window=0, percentile=100.00%, depth=4
00:41:19.307
00:41:19.307 Run status group 0 (all jobs):
00:41:19.307 READ: bw=1125KiB/s (1152kB/s), 1125KiB/s-1125KiB/s (1152kB/s-1152kB/s), io=11.0MiB (11.6MB), run=10040-10040msec
00:41:19.307 12:24:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@"
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:19.308
00:41:19.308 real 0m11.389s
00:41:19.308 user 0m27.732s
00:41:19.308 sys 0m1.012s
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x
00:41:19.308 ************************************
00:41:19.308 END TEST fio_dif_1_default
00:41:19.308 ************************************
00:41:19.308 12:24:42 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems
00:41:19.308 12:24:42 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:41:19.308 12:24:42 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable
00:41:19.308 12:24:42 nvmf_dif -- common/autotest_common.sh@10 -- # set +x
00:41:19.308 ************************************
00:41:19.308 START TEST fio_dif_1_multi_subsystems
00:41:19.308 ************************************
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:41:19.308 bdev_null0
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:19.308 12:24:42 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:41:19.308 [2024-12-05 12:24:43.001136] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@"
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:41:19.308 bdev_null1
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # config=()
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # local subsystem config
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:41:19.308 {
00:41:19.308 "params": {
00:41:19.308 "name": "Nvme$subsystem",
00:41:19.308 "trtype": "$TEST_TRANSPORT",
00:41:19.308 "traddr": "$NVMF_FIRST_TARGET_IP",
00:41:19.308 "adrfam": "ipv4",
00:41:19.308 "trsvcid": "$NVMF_PORT",
00:41:19.308 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:41:19.308 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:41:19.308 "hdgst": ${hdgst:-false},
00:41:19.308 "ddgst": ${ddgst:-false}
00:41:19.308 },
00:41:19.308 "method": "bdev_nvme_attach_controller"
00:41:19.308 }
00:41:19.308 EOF
00:41:19.308 )")
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift
00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local
asan_lib= 00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:19.308 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:41:19.309 { 00:41:19.309 "params": { 00:41:19.309 "name": "Nvme$subsystem", 00:41:19.309 "trtype": "$TEST_TRANSPORT", 00:41:19.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:19.309 "adrfam": "ipv4", 00:41:19.309 "trsvcid": "$NVMF_PORT", 00:41:19.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:19.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:19.309 "hdgst": ${hdgst:-false}, 00:41:19.309 "ddgst": ${ddgst:-false} 00:41:19.309 }, 00:41:19.309 "method": "bdev_nvme_attach_controller" 00:41:19.309 } 00:41:19.309 EOF 00:41:19.309 )") 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:41:19.309 
12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@396 -- # jq . 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@397 -- # IFS=, 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:41:19.309 "params": { 00:41:19.309 "name": "Nvme0", 00:41:19.309 "trtype": "tcp", 00:41:19.309 "traddr": "10.0.0.2", 00:41:19.309 "adrfam": "ipv4", 00:41:19.309 "trsvcid": "4420", 00:41:19.309 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:19.309 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:19.309 "hdgst": false, 00:41:19.309 "ddgst": false 00:41:19.309 }, 00:41:19.309 "method": "bdev_nvme_attach_controller" 00:41:19.309 },{ 00:41:19.309 "params": { 00:41:19.309 "name": "Nvme1", 00:41:19.309 "trtype": "tcp", 00:41:19.309 "traddr": "10.0.0.2", 00:41:19.309 "adrfam": "ipv4", 00:41:19.309 "trsvcid": "4420", 00:41:19.309 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:19.309 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:19.309 "hdgst": false, 00:41:19.309 "ddgst": false 00:41:19.309 }, 00:41:19.309 "method": "bdev_nvme_attach_controller" 00:41:19.309 }' 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 
00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:19.309 12:24:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:19.309 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:19.309 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:19.309 fio-3.35 00:41:19.309 Starting 2 threads 00:41:29.358 00:41:29.358 filename0: (groupid=0, jobs=1): err= 0: pid=1670219: Thu Dec 5 12:24:54 2024 00:41:29.358 read: IOPS=197, BW=791KiB/s (810kB/s)(7920KiB/10016msec) 00:41:29.358 slat (nsec): min=5537, max=32636, avg=6385.64, stdev=1319.66 00:41:29.358 clat (usec): min=543, max=41946, avg=20217.01, stdev=20196.86 00:41:29.358 lat (usec): min=551, max=41979, avg=20223.40, stdev=20196.81 00:41:29.358 clat percentiles (usec): 00:41:29.358 | 1.00th=[ 570], 5.00th=[ 742], 10.00th=[ 758], 20.00th=[ 775], 00:41:29.358 | 30.00th=[ 799], 40.00th=[ 824], 50.00th=[ 865], 60.00th=[41157], 00:41:29.358 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:29.358 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[42206], 00:41:29.358 | 99.99th=[42206] 00:41:29.358 bw ( KiB/s): min= 768, max= 896, per=51.00%, avg=790.40, stdev=42.93, samples=20 00:41:29.358 iops : min= 192, max= 224, avg=197.60, stdev=10.73, samples=20 00:41:29.358 lat (usec) : 750=7.17%, 1000=44.70% 00:41:29.358 lat (msec) : 2=0.05%, 50=48.08% 00:41:29.358 cpu : usr=95.29%, sys=4.49%, ctx=31, majf=0, minf=69 00:41:29.358 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:29.358 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.358 issued rwts: total=1980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:29.358 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:29.358 filename1: (groupid=0, jobs=1): err= 0: pid=1670220: Thu Dec 5 12:24:54 2024 00:41:29.358 read: IOPS=190, BW=760KiB/s (778kB/s)(7632KiB/10039msec) 00:41:29.358 slat (nsec): min=5524, max=33422, avg=6481.62, stdev=1429.01 00:41:29.358 clat (usec): min=511, max=42837, avg=21028.25, stdev=20181.43 00:41:29.358 lat (usec): min=517, max=42871, avg=21034.74, stdev=20181.41 00:41:29.358 clat percentiles (usec): 00:41:29.358 | 1.00th=[ 578], 5.00th=[ 775], 10.00th=[ 799], 20.00th=[ 816], 00:41:29.358 | 30.00th=[ 832], 40.00th=[ 840], 50.00th=[41157], 60.00th=[41157], 00:41:29.358 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:29.358 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:41:29.358 | 99.99th=[42730] 00:41:29.358 bw ( KiB/s): min= 704, max= 768, per=49.12%, avg=761.60, stdev=19.70, samples=20 00:41:29.358 iops : min= 176, max= 192, avg=190.40, stdev= 4.92, samples=20 00:41:29.358 lat (usec) : 750=2.73%, 1000=47.17% 00:41:29.358 lat (msec) : 50=50.10% 00:41:29.358 cpu : usr=95.36%, sys=4.43%, ctx=12, majf=0, minf=187 00:41:29.358 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:29.358 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.358 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:29.358 issued rwts: total=1908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:29.358 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:29.358 00:41:29.358 Run status group 0 (all jobs): 00:41:29.359 READ: bw=1549KiB/s (1586kB/s), 760KiB/s-791KiB/s (778kB/s-810kB/s), io=15.2MiB (15.9MB), run=10016-10039msec 00:41:29.359 12:24:54 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:29.359 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:29.359 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:29.359 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:29.359 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:29.359 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:29.359 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.359 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.619 00:41:29.619 real 0m11.493s 00:41:29.619 user 0m31.675s 00:41:29.619 sys 0m1.223s 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:29.619 12:24:54 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:29.619 ************************************ 00:41:29.619 END TEST fio_dif_1_multi_subsystems 00:41:29.619 ************************************ 00:41:29.619 12:24:54 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:29.619 12:24:54 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:29.619 12:24:54 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:29.619 12:24:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:29.619 ************************************ 00:41:29.619 START TEST fio_dif_rand_params 00:41:29.619 ************************************ 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:29.619 12:24:54 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:29.619 bdev_null0 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.619 
12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:29.619 [2024-12-05 12:24:54.576833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:41:29.619 12:24:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:41:29.619 { 00:41:29.619 "params": { 00:41:29.619 "name": "Nvme$subsystem", 00:41:29.619 "trtype": "$TEST_TRANSPORT", 00:41:29.619 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:41:29.619 "adrfam": "ipv4", 00:41:29.619 "trsvcid": "$NVMF_PORT", 00:41:29.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:29.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:29.620 "hdgst": ${hdgst:-false}, 00:41:29.620 "ddgst": ${ddgst:-false} 00:41:29.620 }, 00:41:29.620 "method": "bdev_nvme_attach_controller" 00:41:29.620 } 00:41:29.620 EOF 00:41:29.620 )") 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:29.620 12:24:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:41:29.620 "params": { 00:41:29.620 "name": "Nvme0", 00:41:29.620 "trtype": "tcp", 00:41:29.620 "traddr": "10.0.0.2", 00:41:29.620 "adrfam": "ipv4", 00:41:29.620 "trsvcid": "4420", 00:41:29.620 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:29.620 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:29.620 "hdgst": false, 00:41:29.620 "ddgst": false 00:41:29.620 }, 00:41:29.620 "method": "bdev_nvme_attach_controller" 00:41:29.620 }' 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:29.620 12:24:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:29.620 12:24:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:30.213 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:30.213 ... 00:41:30.213 fio-3.35 00:41:30.213 Starting 3 threads 00:41:36.783 00:41:36.783 filename0: (groupid=0, jobs=1): err= 0: pid=1672510: Thu Dec 5 12:25:00 2024 00:41:36.783 read: IOPS=313, BW=39.2MiB/s (41.1MB/s)(198MiB/5047msec) 00:41:36.783 slat (nsec): min=5558, max=32298, avg=8452.77, stdev=1827.12 00:41:36.783 clat (usec): min=4813, max=50581, avg=9526.43, stdev=3160.08 00:41:36.783 lat (usec): min=4821, max=50591, avg=9534.89, stdev=3160.47 00:41:36.783 clat percentiles (usec): 00:41:36.783 | 1.00th=[ 4948], 5.00th=[ 6718], 10.00th=[ 7308], 20.00th=[ 7767], 00:41:36.783 | 30.00th=[ 8160], 40.00th=[ 8717], 50.00th=[ 9372], 60.00th=[10028], 00:41:36.783 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11469], 95.00th=[11863], 00:41:36.783 | 99.00th=[12911], 99.50th=[45876], 99.90th=[47449], 99.95th=[50594], 00:41:36.783 | 99.99th=[50594] 00:41:36.783 bw ( KiB/s): min=35584, max=45568, per=34.30%, avg=40473.60, stdev=2642.44, samples=10 00:41:36.783 iops : min= 278, max= 356, avg=316.20, stdev=20.64, samples=10 00:41:36.783 lat (msec) : 10=57.86%, 20=41.63%, 50=0.44%, 100=0.06% 00:41:36.783 cpu : usr=94.05%, sys=5.69%, ctx=7, majf=0, minf=35 00:41:36.783 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:36.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.783 issued rwts: total=1583,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.783 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:36.783 filename0: (groupid=0, jobs=1): err= 0: pid=1672512: Thu Dec 5 12:25:00 2024 00:41:36.783 read: IOPS=293, BW=36.7MiB/s 
(38.5MB/s)(185MiB/5043msec) 00:41:36.783 slat (nsec): min=5631, max=31995, avg=8434.38, stdev=1984.19 00:41:36.783 clat (usec): min=5333, max=91237, avg=10176.26, stdev=6300.50 00:41:36.783 lat (usec): min=5349, max=91244, avg=10184.70, stdev=6300.41 00:41:36.783 clat percentiles (usec): 00:41:36.783 | 1.00th=[ 5800], 5.00th=[ 6390], 10.00th=[ 7046], 20.00th=[ 7635], 00:41:36.783 | 30.00th=[ 7963], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[10421], 00:41:36.783 | 70.00th=[11076], 80.00th=[11469], 90.00th=[11994], 95.00th=[12387], 00:41:36.783 | 99.00th=[47449], 99.50th=[50594], 99.90th=[91751], 99.95th=[91751], 00:41:36.783 | 99.99th=[91751] 00:41:36.783 bw ( KiB/s): min=26624, max=44800, per=32.09%, avg=37862.40, stdev=5658.38, samples=10 00:41:36.783 iops : min= 208, max= 350, avg=295.80, stdev=44.21, samples=10 00:41:36.783 lat (msec) : 10=54.22%, 20=44.29%, 50=0.88%, 100=0.61% 00:41:36.783 cpu : usr=94.61%, sys=5.14%, ctx=10, majf=0, minf=108 00:41:36.783 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:36.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.783 issued rwts: total=1481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.783 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:36.783 filename0: (groupid=0, jobs=1): err= 0: pid=1672513: Thu Dec 5 12:25:00 2024 00:41:36.783 read: IOPS=314, BW=39.3MiB/s (41.3MB/s)(199MiB/5045msec) 00:41:36.783 slat (nsec): min=5553, max=33696, avg=8316.70, stdev=1766.63 00:41:36.783 clat (usec): min=3953, max=88940, avg=9493.13, stdev=11001.95 00:41:36.783 lat (usec): min=3959, max=88948, avg=9501.45, stdev=11002.03 00:41:36.783 clat percentiles (usec): 00:41:36.783 | 1.00th=[ 4621], 5.00th=[ 5211], 10.00th=[ 5669], 20.00th=[ 6063], 00:41:36.783 | 30.00th=[ 6259], 40.00th=[ 6456], 50.00th=[ 6652], 60.00th=[ 6783], 00:41:36.783 | 70.00th=[ 7046], 80.00th=[ 7308], 
90.00th=[ 7898], 95.00th=[47449], 00:41:36.783 | 99.00th=[49021], 99.50th=[49546], 99.90th=[88605], 99.95th=[88605], 00:41:36.783 | 99.99th=[88605] 00:41:36.783 bw ( KiB/s): min=20480, max=60928, per=34.41%, avg=40601.60, stdev=13592.50, samples=10 00:41:36.783 iops : min= 160, max= 476, avg=317.20, stdev=106.19, samples=10 00:41:36.783 lat (msec) : 4=0.06%, 10=92.88%, 20=0.19%, 50=6.61%, 100=0.25% 00:41:36.783 cpu : usr=95.36%, sys=4.40%, ctx=10, majf=0, minf=118 00:41:36.783 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:36.783 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.783 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:36.783 issued rwts: total=1588,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:36.783 latency : target=0, window=0, percentile=100.00%, depth=3 00:41:36.783 00:41:36.783 Run status group 0 (all jobs): 00:41:36.783 READ: bw=115MiB/s (121MB/s), 36.7MiB/s-39.3MiB/s (38.5MB/s-41.3MB/s), io=582MiB (610MB), run=5043-5047msec 00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2
00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k
00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8
00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16
00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime=
00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2
00:41:36.783 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:36.784 bdev_null0
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:36.784 [2024-12-05 12:25:00.718411] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:36.784 bdev_null1
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:36.784 bdev_null2
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=()
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:41:36.784 {
00:41:36.784 "params": {
00:41:36.784 "name": "Nvme$subsystem",
00:41:36.784 "trtype": "$TEST_TRANSPORT",
00:41:36.784 "traddr": "$NVMF_FIRST_TARGET_IP",
00:41:36.784 "adrfam": "ipv4",
00:41:36.784 "trsvcid": "$NVMF_PORT",
00:41:36.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:41:36.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:41:36.784 "hdgst": ${hdgst:-false},
00:41:36.784 "ddgst": ${ddgst:-false}
00:41:36.784 },
00:41:36.784 "method": "bdev_nvme_attach_controller"
00:41:36.784 }
00:41:36.784 EOF
00:41:36.784 )")
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib=
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:41:36.784 {
00:41:36.784 "params": {
00:41:36.784 "name": "Nvme$subsystem",
00:41:36.784 "trtype": "$TEST_TRANSPORT",
00:41:36.784 "traddr": "$NVMF_FIRST_TARGET_IP",
00:41:36.784 "adrfam": "ipv4",
00:41:36.784 "trsvcid": "$NVMF_PORT",
00:41:36.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:41:36.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:41:36.784 "hdgst": ${hdgst:-false},
00:41:36.784 "ddgst": ${ddgst:-false}
00:41:36.784 },
00:41:36.784 "method": "bdev_nvme_attach_controller"
00:41:36.784 }
00:41:36.784 EOF
00:41:36.784 )")
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:41:36.784 12:25:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:41:36.784 {
00:41:36.784 "params": {
00:41:36.784 "name": "Nvme$subsystem",
00:41:36.784 "trtype": "$TEST_TRANSPORT",
00:41:36.784 "traddr": "$NVMF_FIRST_TARGET_IP",
00:41:36.784 "adrfam": "ipv4",
00:41:36.784 "trsvcid": "$NVMF_PORT",
00:41:36.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:41:36.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:41:36.784 "hdgst": ${hdgst:-false},
00:41:36.784 "ddgst": ${ddgst:-false}
00:41:36.784 },
00:41:36.784 "method": "bdev_nvme_attach_controller"
00:41:36.785 }
00:41:36.785 EOF
00:41:36.785 )")
00:41:36.785 12:25:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat
00:41:36.785 12:25:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq .
00:41:36.785 12:25:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=,
00:41:36.785 12:25:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:41:36.785 "params": {
00:41:36.785 "name": "Nvme0",
00:41:36.785 "trtype": "tcp",
00:41:36.785 "traddr": "10.0.0.2",
00:41:36.785 "adrfam": "ipv4",
00:41:36.785 "trsvcid": "4420",
00:41:36.785 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:41:36.785 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:41:36.785 "hdgst": false,
00:41:36.785 "ddgst": false
00:41:36.785 },
00:41:36.785 "method": "bdev_nvme_attach_controller"
00:41:36.785 },{
00:41:36.785 "params": {
00:41:36.785 "name": "Nvme1",
00:41:36.785 "trtype": "tcp",
00:41:36.785 "traddr": "10.0.0.2",
00:41:36.785 "adrfam": "ipv4",
00:41:36.785 "trsvcid": "4420",
00:41:36.785 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:41:36.785 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:41:36.785 "hdgst": false,
00:41:36.785 "ddgst": false
00:41:36.785 },
00:41:36.785 "method": "bdev_nvme_attach_controller"
00:41:36.785 },{
00:41:36.785 "params": {
00:41:36.785 "name": "Nvme2",
00:41:36.785 "trtype": "tcp",
00:41:36.785 "traddr": "10.0.0.2",
00:41:36.785 "adrfam": "ipv4",
00:41:36.785 "trsvcid": "4420",
00:41:36.785 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:41:36.785 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:41:36.785 "hdgst": false,
00:41:36.785 "ddgst": false
00:41:36.785 },
00:41:36.785 "method": "bdev_nvme_attach_controller"
00:41:36.785 }'
00:41:36.785 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:41:36.785 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:41:36.785 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:41:36.785 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:41:36.785 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan
00:41:36.785 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:41:36.785 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=
00:41:36.785 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]]
00:41:36.785 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:41:36.785 12:25:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:41:36.785 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:41:36.785 ...
00:41:36.785 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:41:36.785 ...
00:41:36.785 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:41:36.785 ...
00:41:36.785 fio-3.35
00:41:36.785 Starting 24 threads
00:41:48.993
00:41:48.994 filename0: (groupid=0, jobs=1): err= 0: pid=1673927: Thu Dec 5 12:25:12 2024
00:41:48.994 read: IOPS=681, BW=2728KiB/s (2793kB/s)(26.7MiB/10022msec)
00:41:48.994 slat (nsec): min=5706, max=50787, avg=7116.72, stdev=3054.83
00:41:48.994 clat (usec): min=8901, max=52051, avg=23401.80, stdev=2907.66
00:41:48.994 lat (usec): min=8908, max=52084, avg=23408.92, stdev=2907.97
00:41:48.994 clat percentiles (usec):
00:41:48.994 | 1.00th=[13829], 5.00th=[16057], 10.00th=[22676], 20.00th=[23725],
00:41:48.994 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987],
00:41:48.994 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773],
00:41:48.994 | 99.00th=[25297], 99.50th=[33817], 99.90th=[52167], 99.95th=[52167],
00:41:48.994 | 99.99th=[52167]
00:41:48.994 bw ( KiB/s): min= 2512, max= 3856, per=4.26%, avg=2726.00, stdev=276.07, samples=20
00:41:48.994 iops : min= 628, max= 964, avg=681.40, stdev=69.05, samples=20
00:41:48.994 lat (msec) : 10=0.09%, 20=9.22%, 50=90.46%, 100=0.23%
00:41:48.994 cpu : usr=98.83%, sys=0.90%, ctx=13, majf=0, minf=114
00:41:48.994 IO depths : 1=5.6%, 2=11.2%, 4=23.2%, 8=53.0%, 16=6.9%, 32=0.0%, >=64=0.0%
00:41:48.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.994 complete : 0=0.0%, 4=93.6%, 8=0.6%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.994 issued rwts: total=6834,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.994 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.994 filename0: (groupid=0, jobs=1): err= 0: pid=1673928: Thu Dec 5 12:25:12 2024
00:41:48.994 read: IOPS=674, BW=2698KiB/s (2763kB/s)(26.4MiB/10007msec)
00:41:48.994 slat (nsec): min=5701, max=80774, avg=21621.51, stdev=13694.08
00:41:48.994 clat (usec): min=9578, max=39211, avg=23528.08, stdev=2249.66
00:41:48.994 lat (usec): min=9584, max=39242, avg=23549.71, stdev=2252.55
00:41:48.994 clat percentiles (usec):
00:41:48.994 | 1.00th=[15008], 5.00th=[17695], 10.00th=[23200], 20.00th=[23462],
00:41:48.994 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987],
00:41:48.994 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773],
00:41:48.994 | 99.00th=[30540], 99.50th=[32375], 99.90th=[37487], 99.95th=[39060],
00:41:48.994 | 99.99th=[39060]
00:41:48.994 bw ( KiB/s): min= 2560, max= 3120, per=4.21%, avg=2693.58, stdev=122.95, samples=19
00:41:48.994 iops : min= 640, max= 780, avg=673.37, stdev=30.74, samples=19
00:41:48.994 lat (msec) : 10=0.09%, 20=6.28%, 50=93.63%
00:41:48.994 cpu : usr=98.87%, sys=0.85%, ctx=13, majf=0, minf=57
00:41:48.994 IO depths : 1=5.4%, 2=10.8%, 4=22.3%, 8=54.2%, 16=7.3%, 32=0.0%, >=64=0.0%
00:41:48.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.994 complete : 0=0.0%, 4=93.4%, 8=1.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.994 issued rwts: total=6750,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.994 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.994 filename0: (groupid=0, jobs=1): err= 0: pid=1673929: Thu Dec 5 12:25:12 2024
00:41:48.994 read: IOPS=663, BW=2655KiB/s (2718kB/s)(25.9MiB/10005msec)
00:41:48.994 slat (nsec): min=5731, max=70494, avg=19478.21, stdev=11553.68
00:41:48.994 clat (usec): min=10767, max=37334, avg=23923.20, stdev=1356.44
00:41:48.994 lat (usec): min=10773, max=37350, avg=23942.68, stdev=1356.44
00:41:48.994 clat percentiles (usec):
00:41:48.994 | 1.00th=[22414], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725],
00:41:48.994 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987],
00:41:48.994 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773],
00:41:48.994 | 99.00th=[25822], 99.50th=[31327], 99.90th=[37487], 99.95th=[37487],
00:41:48.994 | 99.99th=[37487]
00:41:48.994 bw ( KiB/s): min= 2554, max= 2688, per=4.13%, avg=2646.32, stdev=61.00, samples=19
00:41:48.994 iops : min= 638, max= 672, avg=661.47, stdev=15.24, samples=19
00:41:48.994 lat (msec) : 20=0.99%, 50=99.01%
00:41:48.994 cpu : usr=98.33%, sys=1.14%, ctx=83, majf=0, minf=54
00:41:48.994 IO depths : 1=5.7%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.8%, 32=0.0%, >=64=0.0%
00:41:48.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.994 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.994 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.994 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.994 filename0: (groupid=0, jobs=1): err= 0: pid=1673930: Thu Dec 5 12:25:12 2024
00:41:48.994 read: IOPS=669, BW=2676KiB/s (2740kB/s)(26.1MiB/10005msec)
00:41:48.994 slat (nsec): min=5713, max=72618, avg=11979.19, stdev=7767.95
00:41:48.994 clat (usec): min=4206, max=32185, avg=23812.58, stdev=1880.43
00:41:48.994 lat (usec): min=4227, max=32196, avg=23824.56, stdev=1879.25
00:41:48.994 clat percentiles (usec):
00:41:48.994 | 1.00th=[ 9896], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725],
00:41:48.994 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987],
00:41:48.994 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773],
00:41:48.994 | 99.00th=[25297], 99.50th=[25822], 99.90th=[26084], 99.95th=[31851],
00:41:48.994 | 99.99th=[32113]
00:41:48.994 bw ( KiB/s): min= 2554, max= 3120, per=4.18%, avg=2676.74, stdev=121.95, samples=19
00:41:48.994 iops : min= 638, max= 780, avg=669.16, stdev=30.52, samples=19
00:41:48.994 lat (msec) : 10=1.03%, 20=0.58%, 50=98.39%
00:41:48.994 cpu : usr=98.82%, sys=0.81%, ctx=81, majf=0, minf=72
00:41:48.994 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0%
00:41:48.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.994 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.994 issued rwts: total=6694,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.994 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.994 filename0: (groupid=0, jobs=1): err= 0: pid=1673931: Thu Dec 5 12:25:12 2024
00:41:48.994 read: IOPS=672, BW=2689KiB/s (2754kB/s)(26.3MiB/10013msec)
00:41:48.994 slat (nsec): min=5720, max=95725, avg=17912.86, stdev=11844.39
00:41:48.994 clat (usec): min=4829, max=40874, avg=23644.74, stdev=2126.05
00:41:48.994 lat (usec): min=4842, max=40881, avg=23662.65, stdev=2125.31
00:41:48.994 clat percentiles (usec):
00:41:48.994 | 1.00th=[10945], 5.00th=[22938], 10.00th=[23462], 20.00th=[23725],
00:41:48.994 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987],
00:41:48.994 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773],
00:41:48.994 | 99.00th=[25297], 99.50th=[25560], 99.90th=[37487], 99.95th=[40633],
00:41:48.994 | 99.99th=[40633]
00:41:48.994 bw ( KiB/s): min= 2554, max= 3072, per=4.20%, avg=2692.74, stdev=106.03, samples=19
00:41:48.994 iops : min= 638, max= 768, avg=673.16, stdev=26.54, samples=19
00:41:48.994 lat (msec) : 10=0.86%, 20=2.41%, 50=96.73%
00:41:48.994 cpu : usr=98.88%, sys=0.82%, ctx=41, majf=0, minf=73
00:41:48.994 IO depths : 1=6.0%, 2=12.0%, 4=24.3%, 8=51.2%, 16=6.6%, 32=0.0%, >=64=0.0%
00:41:48.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.994 complete : 0=0.0%, 4=93.9%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.994 issued rwts: total=6732,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.994 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.994 filename0: (groupid=0, jobs=1): err= 0: pid=1673932: Thu Dec 5 12:25:12 2024
00:41:48.994 read: IOPS=663, BW=2655KiB/s (2719kB/s)(25.9MiB/10003msec)
00:41:48.994 slat (nsec): min=5562, max=69903, avg=20235.24, stdev=11299.15
00:41:48.994 clat (usec): min=11865, max=35258, avg=23924.89, stdev=1066.91
00:41:48.994 lat (usec): min=11873, max=35273, avg=23945.13, stdev=1066.99
00:41:48.994 clat percentiles (usec):
00:41:48.994 | 1.00th=[22676], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725],
00:41:48.994 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987],
00:41:48.994 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773],
00:41:48.994 | 99.00th=[25560], 99.50th=[26084], 99.90th=[35390], 99.95th=[35390],
00:41:48.994 | 99.99th=[35390]
00:41:48.994 bw ( KiB/s): min= 2560, max= 2688, per=4.13%, avg=2647.26, stdev=60.92, samples=19
00:41:48.994 iops : min= 640, max= 672, avg=661.79, stdev=15.22, samples=19
00:41:48.994 lat (msec) : 20=0.68%, 50=99.32%
00:41:48.994 cpu : usr=98.77%, sys=0.85%, ctx=74, majf=0, minf=86
00:41:48.994 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:41:48.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.994 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.994 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.994 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.994 filename0: (groupid=0, jobs=1): err= 0: pid=1673933: Thu Dec 5 12:25:12 2024
00:41:48.994 read: IOPS=664, BW=2658KiB/s (2722kB/s)(26.0MiB/10015msec)
00:41:48.994 slat (nsec): min=5719, max=87618, avg=18332.11, stdev=13639.58
00:41:48.994 clat (usec): min=13593, max=31884, avg=23920.82, stdev=1117.97
00:41:48.994 lat (usec): min=13602, max=31890, avg=23939.16, stdev=1117.91
00:41:48.994 clat percentiles (usec):
00:41:48.994 | 1.00th=[18744], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725],
00:41:48.994 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987],
00:41:48.994 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035],
00:41:48.994 | 99.00th=[25822], 99.50th=[28705], 99.90th=[30802], 99.95th=[31589],
00:41:48.994 | 99.99th=[31851]
00:41:48.994 bw ( KiB/s): min= 2554, max= 2688, per=4.15%, avg=2659.58, stdev=53.78, samples=19
00:41:48.994 iops : min= 638, max= 672, avg=664.84, stdev=13.49, samples=19
00:41:48.994 lat (msec) : 20=1.79%, 50=98.21%
00:41:48.994 cpu : usr=98.44%, sys=1.04%, ctx=149, majf=0, minf=58
00:41:48.994 IO depths : 1=2.6%, 2=8.8%, 4=25.0%, 8=53.7%, 16=9.9%, 32=0.0%, >=64=0.0%
00:41:48.994 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.994 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.994 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.994 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.994 filename0: (groupid=0, jobs=1): err= 0: pid=1673934: Thu Dec 5 12:25:12 2024
00:41:48.994 read: IOPS=681, BW=2725KiB/s (2791kB/s)(26.7MiB/10014msec)
00:41:48.994 slat (nsec): min=5717, max=78626, avg=14404.98, stdev=11944.57
00:41:48.994 clat (usec): min=1010, max=33679, avg=23368.31, stdev=3443.61
00:41:48.994 lat (usec): min=1029, max=33686, avg=23382.72, stdev=3443.12
00:41:48.994 clat percentiles (usec):
00:41:48.994 | 1.00th=[ 1762], 5.00th=[22676], 10.00th=[23462], 20.00th=[23725],
00:41:48.994 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987],
00:41:48.994 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773],
00:41:48.994 | 99.00th=[25560], 99.50th=[26870], 99.90th=[29492], 99.95th=[31589],
00:41:48.994 | 99.99th=[33817]
00:41:48.994 bw ( KiB/s): min= 2560, max= 3896, per=4.27%, avg=2731.05, stdev=289.25, samples=19
00:41:48.994 iops : min= 640, max= 974, avg=682.74, stdev=72.32, samples=19
00:41:48.994 lat (msec) : 2=1.07%, 4=0.34%, 10=1.16%, 20=2.11%, 50=95.32%
00:41:48.994 cpu : usr=98.58%, sys=0.96%, ctx=107, majf=0, minf=69
00:41:48.995 IO depths : 1=5.7%, 2=11.8%, 4=24.4%, 8=51.2%, 16=6.8%, 32=0.0%, >=64=0.0%
00:41:48.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 issued rwts: total=6823,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.995 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.995 filename1: (groupid=0, jobs=1): err= 0: pid=1673935: Thu Dec 5 12:25:12 2024
00:41:48.995 read: IOPS=665, BW=2661KiB/s (2725kB/s)(26.0MiB/10005msec)
00:41:48.995 slat (nsec): min=5751, max=85045, avg=18689.99, stdev=14066.30
00:41:48.995 clat (usec): min=13591, max=29767, avg=23893.92, stdev=977.08
00:41:48.995 lat (usec): min=13600, max=29801, avg=23912.61, stdev=976.67
00:41:48.995 clat percentiles (usec):
00:41:48.995 | 1.00th=[19006], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725],
00:41:48.995 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987],
00:41:48.995 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773],
00:41:48.995 | 99.00th=[25297], 99.50th=[25560], 99.90th=[28967], 99.95th=[28967],
00:41:48.995 | 99.99th=[29754]
00:41:48.995 bw ( KiB/s): min= 2554, max= 2688, per=4.15%, avg=2660.42, stdev=54.11, samples=19
00:41:48.995 iops : min= 638, max= 672, avg=665.05, stdev=13.57, samples=19
00:41:48.995 lat (msec) : 20=1.17%, 50=98.83%
00:41:48.995 cpu : usr=98.68%, sys=0.93%, ctx=74, majf=0, minf=49
00:41:48.995 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:41:48.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 issued rwts: total=6656,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.995 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.995 filename1: (groupid=0, jobs=1): err= 0: pid=1673936: Thu Dec 5 12:25:12 2024
00:41:48.995 read: IOPS=666, BW=2667KiB/s (2731kB/s)(26.1MiB/10005msec)
00:41:48.995 slat (nsec): min=5715, max=82320, avg=14941.40, stdev=12359.56
00:41:48.995 clat (usec): min=5091, max=35179, avg=23866.63, stdev=2293.47
00:41:48.995 lat (usec): min=5103, max=35189, avg=23881.57, stdev=2292.88
00:41:48.995 clat percentiles (usec):
00:41:48.995 | 1.00th=[11076], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725],
00:41:48.995 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987],
00:41:48.995 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035],
00:41:48.995 | 99.00th=[32113], 99.50th=[32900], 99.90th=[33424], 99.95th=[34866],
00:41:48.995 | 99.99th=[35390]
00:41:48.995 bw ( KiB/s): min= 2554, max= 3072, per=4.17%, avg=2667.47, stdev=115.31, samples=19
00:41:48.995 iops : min= 638, max= 768, avg=666.84, stdev=28.85, samples=19
00:41:48.995 lat (msec) : 10=0.58%, 20=2.40%, 50=97.02%
00:41:48.995 cpu : usr=98.51%, sys=1.04%, ctx=69, majf=0, minf=82
00:41:48.995 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0%
00:41:48.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 issued rwts: total=6672,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.995 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.995 filename1: (groupid=0, jobs=1): err= 0: pid=1673937: Thu Dec 5 12:25:12 2024
00:41:48.995 read: IOPS=664, BW=2657KiB/s (2721kB/s)(26.0MiB/10005msec)
00:41:48.995 slat (nsec): min=5705, max=87143, avg=21509.98, stdev=12901.13
00:41:48.995 clat (usec): min=4397, max=52409, avg=23881.40, stdev=1775.90
00:41:48.995 lat (usec): min=4403, max=52427, avg=23902.91, stdev=1776.30
00:41:48.995 clat percentiles (usec):
00:41:48.995 | 1.00th=[18220], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725],
00:41:48.995 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987],
00:41:48.995 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773],
00:41:48.995 | 99.00th=[27657], 99.50th=[30802], 99.90th=[44303], 99.95th=[44303],
00:41:48.995 | 99.99th=[52167]
00:41:48.995 bw ( KiB/s): min= 2432, max= 2704, per=4.13%, avg=2642.42, stdev=75.16, samples=19
00:41:48.995 iops : min= 608, max= 676, avg=660.53, stdev=18.84, samples=19
00:41:48.995 lat (msec) : 10=0.24%, 20=1.31%, 50=98.42%, 100=0.03%
00:41:48.995 cpu : usr=99.11%, sys=0.61%, ctx=12, majf=0, minf=44
00:41:48.995 IO depths : 1=5.4%, 2=11.6%, 4=24.7%, 8=51.3%, 16=7.1%, 32=0.0%, >=64=0.0%
00:41:48.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 issued rwts: total=6646,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.995 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.995 filename1: (groupid=0, jobs=1): err= 0: pid=1673938: Thu Dec 5 12:25:12 2024
00:41:48.995 read: IOPS=663, BW=2653KiB/s (2717kB/s)(25.9MiB/10004msec)
00:41:48.995 slat (nsec): min=5560, max=86269, avg=18386.94, stdev=13989.34
00:41:48.995 clat (usec): min=6010, max=44114, avg=23963.43, stdev=2115.46
00:41:48.995 lat (usec): min=6015, max=44131, avg=23981.82, stdev=2115.84
00:41:48.995 clat percentiles (usec):
00:41:48.995 | 1.00th=[15401], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725],
00:41:48.995 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987],
00:41:48.995 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035],
00:41:48.995 | 99.00th=[32113], 99.50th=[33424], 99.90th=[44303], 99.95th=[44303],
00:41:48.995 | 99.99th=[44303]
00:41:48.995 bw ( KiB/s): min= 2432, max= 2736, per=4.12%, avg=2638.21, stdev=78.22, samples=19
00:41:48.995 iops : min= 608, max= 684, avg=659.47, stdev=19.54, samples=19
00:41:48.995 lat (msec) : 10=0.39%, 20=1.63%, 50=97.98%
00:41:48.995 cpu : usr=99.09%, sys=0.63%, ctx=11, majf=0, minf=63
00:41:48.995 IO depths : 1=3.4%, 2=7.4%, 4=16.3%, 8=61.8%, 16=11.1%, 32=0.0%, >=64=0.0%
00:41:48.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 complete : 0=0.0%, 4=92.4%, 8=3.8%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 issued rwts: total=6636,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.995 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.995 filename1: (groupid=0, jobs=1): err= 0: pid=1673939: Thu Dec 5 12:25:12 2024
00:41:48.995 read: IOPS=668, BW=2674KiB/s (2738kB/s)(26.1MiB/10005msec)
00:41:48.995 slat (nsec): min=5747, max=97586, avg=13043.08, stdev=8857.88
00:41:48.995 clat (usec): min=7850, max=30408, avg=23823.91, stdev=1745.34
00:41:48.995 lat (usec): min=7870, max=30414, avg=23836.96, stdev=1743.59
00:41:48.995 clat percentiles (usec):
00:41:48.995 | 1.00th=[12780], 5.00th=[23462], 10.00th=[23725], 20.00th=[23725],
00:41:48.995 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987],
00:41:48.995 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773],
00:41:48.995 | 99.00th=[25560], 99.50th=[26084], 99.90th=[30278], 99.95th=[30278],
00:41:48.995 | 99.99th=[30278]
00:41:48.995 bw ( KiB/s): min= 2560, max= 2944, per=4.18%, avg=2674.21, stdev=84.16, samples=19
00:41:48.995 iops : min= 640, max= 736, avg=668.53, stdev=21.04, samples=19
00:41:48.995 lat (msec) : 10=0.72%, 20=1.58%, 50=97.70%
00:41:48.995 cpu : usr=98.86%, sys=0.85%, ctx=13, majf=0, minf=58
00:41:48.995 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:41:48.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 issued rwts: total=6688,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.995 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.995 filename1: (groupid=0, jobs=1): err= 0: pid=1673940: Thu Dec 5 12:25:12 2024
00:41:48.995 read: IOPS=663, BW=2655KiB/s (2719kB/s)(25.9MiB/10004msec)
00:41:48.995 slat (nsec): min=5727, max=66271, avg=15516.69, stdev=10357.13
00:41:48.995 clat (usec): min=9754, max=44590, avg=23972.73, stdev=1787.28
00:41:48.995 lat (usec): min=9760, max=44607, avg=23988.25, stdev=1787.36
00:41:48.995 clat percentiles (usec):
00:41:48.995 | 1.00th=[16057], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725],
00:41:48.995 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987],
00:41:48.995 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773],
00:41:48.995 | 99.00th=[26084], 99.50th=[32637], 99.90th=[44303], 99.95th=[44827],
00:41:48.995 | 99.99th=[44827]
00:41:48.995 bw ( KiB/s): min= 2432, max= 2688, per=4.13%, avg=2646.63, stdev=74.04, samples=19
00:41:48.995 iops : min= 608, max= 672, avg=661.58, stdev=18.47, samples=19
00:41:48.995 lat (msec) : 10=0.24%, 20=1.23%, 50=98.52%
00:41:48.995 cpu : usr=99.04%, sys=0.68%, ctx=27, majf=0, minf=49
00:41:48.995 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0%
00:41:48.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.995 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.995 filename1: (groupid=0, jobs=1): err= 0: pid=1673941: Thu Dec 5 12:25:12 2024
00:41:48.995 read: IOPS=692, BW=2770KiB/s (2837kB/s)(27.1MiB/10015msec)
00:41:48.995 slat (nsec): min=5698, max=79017, avg=13291.89, stdev=11502.03
00:41:48.995 clat (usec): min=4943, max=46050, avg=23031.02, stdev=4609.30
00:41:48.995 lat (usec): min=4953, max=46078, avg=23044.32, stdev=4610.67
00:41:48.995 clat percentiles (usec):
00:41:48.995 | 1.00th=[13173], 5.00th=[15533], 10.00th=[16909], 20.00th=[19530],
00:41:48.995 | 30.00th=[21103], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987],
00:41:48.995 | 70.00th=[23987], 80.00th=[24511], 90.00th=[28181], 95.00th=[31065],
00:41:48.995 | 99.00th=[36963], 99.50th=[39060], 99.90th=[42206], 99.95th=[42206],
00:41:48.995 | 99.99th=[45876]
00:41:48.995 bw ( KiB/s): min= 2576, max= 3110, per=4.33%, avg=2770.10, stdev=144.63, samples=20
00:41:48.995 iops : min= 644, max= 777, avg=692.45, stdev=36.11, samples=20
00:41:48.995 lat (msec) : 10=0.33%, 20=23.88%, 50=75.79%
00:41:48.995 cpu : usr=98.10%, sys=1.27%, ctx=122, majf=0, minf=45
00:41:48.995 IO depths : 1=0.2%, 2=0.8%, 4=6.1%, 8=78.2%, 16=14.7%, 32=0.0%, >=64=0.0%
00:41:48.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 complete : 0=0.0%, 4=89.3%, 8=6.9%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 issued rwts: total=6936,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.995 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.995 filename1: (groupid=0, jobs=1): err= 0: pid=1673942: Thu Dec 5 12:25:12 2024
00:41:48.995 read: IOPS=670, BW=2683KiB/s (2748kB/s)(26.2MiB/10006msec)
00:41:48.995 slat (nsec): min=5396, max=81843, avg=18323.16, stdev=13704.60
00:41:48.995 clat (usec): min=11382, max=38221, avg=23691.98, stdev=3040.89
00:41:48.995 lat (usec): min=11395, max=38227, avg=23710.30, stdev=3041.74
00:41:48.995 clat percentiles (usec):
00:41:48.995 | 1.00th=[14877], 5.00th=[17171], 10.00th=[20055], 20.00th=[23462],
00:41:48.995 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987],
00:41:48.995 | 70.00th=[23987], 80.00th=[24511], 90.00th=[25297], 95.00th=[29230],
00:41:48.995 | 99.00th=[32637], 99.50th=[33424], 99.90th=[38011], 99.95th=[38011],
00:41:48.995 | 99.99th=[38011]
00:41:48.995 bw ( KiB/s): min= 2432, max= 2896, per=4.20%, avg=2691.05, stdev=114.48, samples=19
00:41:48.995 iops : min= 608, max= 724, avg=672.74, stdev=28.62, samples=19
00:41:48.995 lat (msec) : 20=9.80%, 50=90.20%
00:41:48.995 cpu : usr=98.56%, sys=1.04%, ctx=128, majf=0, minf=59
00:41:48.995 IO depths : 1=3.9%, 2=7.8%, 4=17.0%, 8=61.6%, 16=9.6%, 32=0.0%, >=64=0.0%
00:41:48.995 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 complete : 0=0.0%, 4=92.0%, 8=3.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0%
00:41:48.995 issued rwts: total=6712,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:41:48.995 latency : target=0, window=0, percentile=100.00%, depth=16
00:41:48.995 filename2: (groupid=0, jobs=1): err= 0: pid=1673943: Thu Dec 5 12:25:12 2024
00:41:48.995
read: IOPS=664, BW=2656KiB/s (2720kB/s)(26.0MiB/10005msec) 00:41:48.995 slat (nsec): min=5738, max=74205, avg=21491.93, stdev=13142.62 00:41:48.995 clat (usec): min=8013, max=52534, avg=23916.08, stdev=1773.07 00:41:48.995 lat (usec): min=8034, max=52550, avg=23937.57, stdev=1772.98 00:41:48.995 clat percentiles (usec): 00:41:48.995 | 1.00th=[18744], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:41:48.996 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:41:48.996 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:41:48.996 | 99.00th=[26084], 99.50th=[33424], 99.90th=[44303], 99.95th=[44303], 00:41:48.996 | 99.99th=[52691] 00:41:48.996 bw ( KiB/s): min= 2480, max= 2704, per=4.13%, avg=2642.42, stdev=62.85, samples=19 00:41:48.996 iops : min= 620, max= 676, avg=660.53, stdev=15.76, samples=19 00:41:48.996 lat (msec) : 10=0.21%, 20=1.38%, 50=98.37%, 100=0.03% 00:41:48.996 cpu : usr=98.42%, sys=1.11%, ctx=130, majf=0, minf=59 00:41:48.996 IO depths : 1=1.5%, 2=7.7%, 4=24.7%, 8=55.1%, 16=11.0%, 32=0.0%, >=64=0.0% 00:41:48.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 issued rwts: total=6644,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.996 filename2: (groupid=0, jobs=1): err= 0: pid=1673944: Thu Dec 5 12:25:12 2024 00:41:48.996 read: IOPS=661, BW=2647KiB/s (2710kB/s)(26.0MiB/10044msec) 00:41:48.996 slat (nsec): min=5719, max=70714, avg=18777.87, stdev=11873.62 00:41:48.996 clat (usec): min=7695, max=48428, avg=23948.01, stdev=2096.40 00:41:48.996 lat (usec): min=7716, max=48447, avg=23966.79, stdev=2096.62 00:41:48.996 clat percentiles (usec): 00:41:48.996 | 1.00th=[15270], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:41:48.996 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:41:48.996 | 
70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:41:48.996 | 99.00th=[32375], 99.50th=[33817], 99.90th=[43779], 99.95th=[48497], 00:41:48.996 | 99.99th=[48497] 00:41:48.996 bw ( KiB/s): min= 2436, max= 2688, per=4.13%, avg=2642.63, stdev=73.00, samples=19 00:41:48.996 iops : min= 609, max= 672, avg=660.58, stdev=18.30, samples=19 00:41:48.996 lat (msec) : 10=0.15%, 20=1.82%, 50=98.03% 00:41:48.996 cpu : usr=99.06%, sys=0.66%, ctx=13, majf=0, minf=48 00:41:48.996 IO depths : 1=5.0%, 2=11.2%, 4=24.8%, 8=51.5%, 16=7.5%, 32=0.0%, >=64=0.0% 00:41:48.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 issued rwts: total=6646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.996 filename2: (groupid=0, jobs=1): err= 0: pid=1673945: Thu Dec 5 12:25:12 2024 00:41:48.996 read: IOPS=666, BW=2668KiB/s (2732kB/s)(26.1MiB/10012msec) 00:41:48.996 slat (nsec): min=5729, max=82931, avg=19977.70, stdev=12489.00 00:41:48.996 clat (usec): min=7517, max=40123, avg=23797.30, stdev=1619.01 00:41:48.996 lat (usec): min=7526, max=40135, avg=23817.27, stdev=1619.02 00:41:48.996 clat percentiles (usec): 00:41:48.996 | 1.00th=[16581], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:41:48.996 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:41:48.996 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:41:48.996 | 99.00th=[25560], 99.50th=[26084], 99.90th=[40109], 99.95th=[40109], 00:41:48.996 | 99.99th=[40109] 00:41:48.996 bw ( KiB/s): min= 2554, max= 2949, per=4.17%, avg=2670.26, stdev=86.81, samples=19 00:41:48.996 iops : min= 638, max= 737, avg=667.53, stdev=21.70, samples=19 00:41:48.996 lat (msec) : 10=0.45%, 20=1.23%, 50=98.32% 00:41:48.996 cpu : usr=98.23%, sys=1.18%, ctx=225, majf=0, minf=76 00:41:48.996 IO depths : 1=6.1%, 
2=12.2%, 4=24.6%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:41:48.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 issued rwts: total=6678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.996 filename2: (groupid=0, jobs=1): err= 0: pid=1673946: Thu Dec 5 12:25:12 2024 00:41:48.996 read: IOPS=667, BW=2671KiB/s (2735kB/s)(26.1MiB/10005msec) 00:41:48.996 slat (nsec): min=5715, max=80122, avg=21225.18, stdev=14598.52 00:41:48.996 clat (usec): min=7109, max=36935, avg=23771.80, stdev=2004.96 00:41:48.996 lat (usec): min=7146, max=36970, avg=23793.03, stdev=2005.46 00:41:48.996 clat percentiles (usec): 00:41:48.996 | 1.00th=[15139], 5.00th=[22938], 10.00th=[23462], 20.00th=[23462], 00:41:48.996 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:41:48.996 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:41:48.996 | 99.00th=[31589], 99.50th=[32637], 99.90th=[35914], 99.95th=[36963], 00:41:48.996 | 99.99th=[36963] 00:41:48.996 bw ( KiB/s): min= 2560, max= 3008, per=4.18%, avg=2677.26, stdev=105.04, samples=19 00:41:48.996 iops : min= 640, max= 752, avg=669.26, stdev=26.26, samples=19 00:41:48.996 lat (msec) : 10=0.24%, 20=3.55%, 50=96.21% 00:41:48.996 cpu : usr=98.46%, sys=0.99%, ctx=107, majf=0, minf=67 00:41:48.996 IO depths : 1=5.7%, 2=11.5%, 4=23.7%, 8=52.1%, 16=6.9%, 32=0.0%, >=64=0.0% 00:41:48.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 issued rwts: total=6680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.996 filename2: (groupid=0, jobs=1): err= 0: pid=1673947: Thu Dec 5 12:25:12 2024 00:41:48.996 read: IOPS=663, BW=2655KiB/s 
(2719kB/s)(25.9MiB/10004msec) 00:41:48.996 slat (nsec): min=5705, max=49814, avg=13261.07, stdev=7422.07 00:41:48.996 clat (usec): min=9810, max=44638, avg=23999.38, stdev=2071.11 00:41:48.996 lat (usec): min=9817, max=44657, avg=24012.64, stdev=2071.38 00:41:48.996 clat percentiles (usec): 00:41:48.996 | 1.00th=[15139], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:41:48.996 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:41:48.996 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24773], 95.00th=[25035], 00:41:48.996 | 99.00th=[32113], 99.50th=[33817], 99.90th=[44827], 99.95th=[44827], 00:41:48.996 | 99.99th=[44827] 00:41:48.996 bw ( KiB/s): min= 2432, max= 2688, per=4.13%, avg=2646.63, stdev=72.75, samples=19 00:41:48.996 iops : min= 608, max= 672, avg=661.58, stdev=18.15, samples=19 00:41:48.996 lat (msec) : 10=0.24%, 20=2.02%, 50=97.74% 00:41:48.996 cpu : usr=98.61%, sys=1.08%, ctx=72, majf=0, minf=71 00:41:48.996 IO depths : 1=4.5%, 2=10.7%, 4=24.9%, 8=51.9%, 16=8.0%, 32=0.0%, >=64=0.0% 00:41:48.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.996 filename2: (groupid=0, jobs=1): err= 0: pid=1673948: Thu Dec 5 12:25:12 2024 00:41:48.996 read: IOPS=685, BW=2741KiB/s (2806kB/s)(26.8MiB/10004msec) 00:41:48.996 slat (nsec): min=5706, max=73718, avg=14961.85, stdev=10767.55 00:41:48.996 clat (usec): min=8466, max=57055, avg=23242.18, stdev=3336.94 00:41:48.996 lat (usec): min=8474, max=57075, avg=23257.14, stdev=3338.17 00:41:48.996 clat percentiles (usec): 00:41:48.996 | 1.00th=[14615], 5.00th=[16581], 10.00th=[19006], 20.00th=[21627], 00:41:48.996 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23725], 60.00th=[23987], 00:41:48.996 | 70.00th=[23987], 80.00th=[24249], 
90.00th=[24773], 95.00th=[27919], 00:41:48.996 | 99.00th=[33817], 99.50th=[36439], 99.90th=[43779], 99.95th=[43779], 00:41:48.996 | 99.99th=[56886] 00:41:48.996 bw ( KiB/s): min= 2549, max= 2970, per=4.27%, avg=2731.95, stdev=107.88, samples=19 00:41:48.996 iops : min= 637, max= 742, avg=682.89, stdev=26.95, samples=19 00:41:48.996 lat (msec) : 10=0.23%, 20=14.91%, 50=84.83%, 100=0.03% 00:41:48.996 cpu : usr=98.81%, sys=0.82%, ctx=86, majf=0, minf=76 00:41:48.996 IO depths : 1=2.7%, 2=5.9%, 4=14.3%, 8=65.8%, 16=11.3%, 32=0.0%, >=64=0.0% 00:41:48.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 complete : 0=0.0%, 4=91.5%, 8=4.3%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 issued rwts: total=6854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.996 filename2: (groupid=0, jobs=1): err= 0: pid=1673949: Thu Dec 5 12:25:12 2024 00:41:48.996 read: IOPS=663, BW=2654KiB/s (2718kB/s)(25.9MiB/10007msec) 00:41:48.996 slat (nsec): min=5818, max=78399, avg=20358.14, stdev=11655.51 00:41:48.996 clat (usec): min=11477, max=35300, avg=23939.27, stdev=1123.71 00:41:48.996 lat (usec): min=11483, max=35316, avg=23959.63, stdev=1124.08 00:41:48.996 clat percentiles (usec): 00:41:48.996 | 1.00th=[20055], 5.00th=[23462], 10.00th=[23462], 20.00th=[23725], 00:41:48.996 | 30.00th=[23725], 40.00th=[23725], 50.00th=[23987], 60.00th=[23987], 00:41:48.996 | 70.00th=[23987], 80.00th=[24249], 90.00th=[24511], 95.00th=[24773], 00:41:48.996 | 99.00th=[25822], 99.50th=[29492], 99.90th=[35390], 99.95th=[35390], 00:41:48.996 | 99.99th=[35390] 00:41:48.996 bw ( KiB/s): min= 2538, max= 2688, per=4.13%, avg=2646.95, stdev=61.73, samples=19 00:41:48.996 iops : min= 634, max= 672, avg=661.68, stdev=15.47, samples=19 00:41:48.996 lat (msec) : 20=1.01%, 50=98.99% 00:41:48.996 cpu : usr=98.59%, sys=1.05%, ctx=169, majf=0, minf=55 00:41:48.996 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 
16=7.0%, 32=0.0%, >=64=0.0% 00:41:48.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 issued rwts: total=6640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.996 filename2: (groupid=0, jobs=1): err= 0: pid=1673950: Thu Dec 5 12:25:12 2024 00:41:48.996 read: IOPS=663, BW=2655KiB/s (2719kB/s)(26.0MiB/10013msec) 00:41:48.996 slat (nsec): min=5084, max=70988, avg=15950.18, stdev=11578.63 00:41:48.996 clat (usec): min=12053, max=38978, avg=23971.48, stdev=1749.91 00:41:48.996 lat (usec): min=12060, max=38988, avg=23987.43, stdev=1749.90 00:41:48.996 clat percentiles (usec): 00:41:48.996 | 1.00th=[16581], 5.00th=[23200], 10.00th=[23462], 20.00th=[23725], 00:41:48.996 | 30.00th=[23725], 40.00th=[23987], 50.00th=[23987], 60.00th=[23987], 00:41:48.996 | 70.00th=[24249], 80.00th=[24249], 90.00th=[24511], 95.00th=[25035], 00:41:48.996 | 99.00th=[31589], 99.50th=[32637], 99.90th=[39060], 99.95th=[39060], 00:41:48.996 | 99.99th=[39060] 00:41:48.996 bw ( KiB/s): min= 2560, max= 2688, per=4.14%, avg=2650.05, stdev=51.90, samples=19 00:41:48.996 iops : min= 640, max= 672, avg=662.47, stdev=12.98, samples=19 00:41:48.996 lat (msec) : 20=2.54%, 50=97.46% 00:41:48.996 cpu : usr=98.64%, sys=0.90%, ctx=81, majf=0, minf=49 00:41:48.996 IO depths : 1=4.6%, 2=9.8%, 4=22.4%, 8=55.3%, 16=7.9%, 32=0.0%, >=64=0.0% 00:41:48.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 complete : 0=0.0%, 4=93.5%, 8=0.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:48.996 issued rwts: total=6646,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:48.996 latency : target=0, window=0, percentile=100.00%, depth=16 00:41:48.996 00:41:48.996 Run status group 0 (all jobs): 00:41:48.996 READ: bw=62.5MiB/s (65.6MB/s), 2647KiB/s-2770KiB/s (2710kB/s-2837kB/s), io=628MiB (659MB), run=10003-10044msec 
00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.996 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@115 -- # runtime=5 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.997 bdev_null0 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.997 12:25:12 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.997 [2024-12-05 12:25:12.646614] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.997 bdev_null1 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:48.997 12:25:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:41:48.997 { 00:41:48.997 "params": { 00:41:48.997 "name": "Nvme$subsystem", 00:41:48.997 "trtype": "$TEST_TRANSPORT", 00:41:48.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:48.997 
"adrfam": "ipv4", 00:41:48.997 "trsvcid": "$NVMF_PORT", 00:41:48.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:48.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:48.997 "hdgst": ${hdgst:-false}, 00:41:48.997 "ddgst": ${ddgst:-false} 00:41:48.997 }, 00:41:48.997 "method": "bdev_nvme_attach_controller" 00:41:48.997 } 00:41:48.997 EOF 00:41:48.997 )") 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 
-- # (( file <= files )) 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:41:48.997 { 00:41:48.997 "params": { 00:41:48.997 "name": "Nvme$subsystem", 00:41:48.997 "trtype": "$TEST_TRANSPORT", 00:41:48.997 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:48.997 "adrfam": "ipv4", 00:41:48.997 "trsvcid": "$NVMF_PORT", 00:41:48.997 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:48.997 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:48.997 "hdgst": ${hdgst:-false}, 00:41:48.997 "ddgst": ${ddgst:-false} 00:41:48.997 }, 00:41:48.997 "method": "bdev_nvme_attach_controller" 00:41:48.997 } 00:41:48.997 EOF 00:41:48.997 )") 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:41:48.997 "params": { 00:41:48.997 "name": "Nvme0", 00:41:48.997 "trtype": "tcp", 00:41:48.997 "traddr": "10.0.0.2", 00:41:48.997 "adrfam": "ipv4", 00:41:48.997 "trsvcid": "4420", 00:41:48.997 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:48.997 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:48.997 "hdgst": false, 00:41:48.997 "ddgst": false 00:41:48.997 }, 00:41:48.997 "method": "bdev_nvme_attach_controller" 00:41:48.997 },{ 00:41:48.997 "params": { 00:41:48.997 "name": "Nvme1", 00:41:48.997 "trtype": "tcp", 00:41:48.997 "traddr": "10.0.0.2", 00:41:48.997 "adrfam": "ipv4", 00:41:48.997 "trsvcid": "4420", 00:41:48.997 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:48.997 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:48.997 "hdgst": false, 00:41:48.997 "ddgst": false 00:41:48.997 }, 00:41:48.997 "method": "bdev_nvme_attach_controller" 00:41:48.997 }' 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:48.997 12:25:12 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:48.997 12:25:12 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:48.997 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:48.997 ... 00:41:48.997 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:41:48.997 ... 00:41:48.997 fio-3.35 00:41:48.997 Starting 4 threads 00:41:54.277 00:41:54.277 filename0: (groupid=0, jobs=1): err= 0: pid=1676385: Thu Dec 5 12:25:18 2024 00:41:54.277 read: IOPS=2896, BW=22.6MiB/s (23.7MB/s)(113MiB/5002msec) 00:41:54.277 slat (nsec): min=5541, max=66009, avg=6061.61, stdev=1976.00 00:41:54.277 clat (usec): min=1313, max=4938, avg=2744.86, stdev=212.65 00:41:54.277 lat (usec): min=1318, max=4944, avg=2750.93, stdev=212.74 00:41:54.277 clat percentiles (usec): 00:41:54.277 | 1.00th=[ 2311], 5.00th=[ 2507], 10.00th=[ 2540], 20.00th=[ 2671], 00:41:54.277 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2704], 00:41:54.277 | 70.00th=[ 2737], 80.00th=[ 2835], 90.00th=[ 2966], 95.00th=[ 2999], 00:41:54.277 | 99.00th=[ 3752], 99.50th=[ 4015], 99.90th=[ 4490], 99.95th=[ 4555], 00:41:54.277 | 99.99th=[ 4948] 00:41:54.277 bw ( KiB/s): min=22749, max=23328, per=24.67%, avg=23174.78, stdev=177.46, samples=9 00:41:54.277 iops : min= 2843, max= 2916, avg=2896.78, stdev=22.37, samples=9 00:41:54.277 lat (msec) : 2=0.06%, 4=99.39%, 10=0.55% 00:41:54.277 cpu : usr=96.72%, sys=3.04%, ctx=6, majf=0, minf=93 00:41:54.277 IO depths : 1=0.1%, 2=0.1%, 4=73.5%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:54.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.277 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:41:54.277 issued rwts: total=14488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.277 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:54.277 filename0: (groupid=0, jobs=1): err= 0: pid=1676387: Thu Dec 5 12:25:18 2024 00:41:54.277 read: IOPS=2907, BW=22.7MiB/s (23.8MB/s)(114MiB/5002msec) 00:41:54.277 slat (nsec): min=5538, max=79706, avg=6107.34, stdev=1802.68 00:41:54.277 clat (usec): min=1518, max=4586, avg=2735.08, stdev=207.37 00:41:54.277 lat (usec): min=1524, max=4613, avg=2741.18, stdev=207.48 00:41:54.277 clat percentiles (usec): 00:41:54.277 | 1.00th=[ 2245], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2671], 00:41:54.277 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2704], 00:41:54.277 | 70.00th=[ 2737], 80.00th=[ 2835], 90.00th=[ 2933], 95.00th=[ 2966], 00:41:54.277 | 99.00th=[ 3654], 99.50th=[ 4015], 99.90th=[ 4293], 99.95th=[ 4293], 00:41:54.277 | 99.99th=[ 4555] 00:41:54.277 bw ( KiB/s): min=23070, max=23440, per=24.77%, avg=23270.89, stdev=107.23, samples=9 00:41:54.277 iops : min= 2883, max= 2930, avg=2908.78, stdev=13.58, samples=9 00:41:54.277 lat (msec) : 2=0.23%, 4=99.26%, 10=0.51% 00:41:54.277 cpu : usr=96.52%, sys=3.26%, ctx=6, majf=0, minf=59 00:41:54.277 IO depths : 1=0.1%, 2=0.1%, 4=72.1%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:54.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.277 complete : 0=0.0%, 4=92.4%, 8=7.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.277 issued rwts: total=14542,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.277 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:54.277 filename1: (groupid=0, jobs=1): err= 0: pid=1676388: Thu Dec 5 12:25:18 2024 00:41:54.277 read: IOPS=2909, BW=22.7MiB/s (23.8MB/s)(114MiB/5003msec) 00:41:54.277 slat (nsec): min=5536, max=66687, avg=6174.00, stdev=2290.86 00:41:54.277 clat (usec): min=1102, max=4569, avg=2732.97, stdev=196.00 00:41:54.277 lat (usec): min=1108, max=4575, 
avg=2739.14, stdev=196.09 00:41:54.277 clat percentiles (usec): 00:41:54.277 | 1.00th=[ 2245], 5.00th=[ 2474], 10.00th=[ 2540], 20.00th=[ 2671], 00:41:54.277 | 30.00th=[ 2704], 40.00th=[ 2704], 50.00th=[ 2704], 60.00th=[ 2737], 00:41:54.277 | 70.00th=[ 2737], 80.00th=[ 2835], 90.00th=[ 2933], 95.00th=[ 2966], 00:41:54.277 | 99.00th=[ 3425], 99.50th=[ 3851], 99.90th=[ 4293], 99.95th=[ 4359], 00:41:54.277 | 99.99th=[ 4555] 00:41:54.277 bw ( KiB/s): min=23152, max=23472, per=24.81%, avg=23304.89, stdev=117.33, samples=9 00:41:54.277 iops : min= 2894, max= 2934, avg=2913.11, stdev=14.67, samples=9 00:41:54.277 lat (msec) : 2=0.12%, 4=99.51%, 10=0.37% 00:41:54.277 cpu : usr=96.40%, sys=3.38%, ctx=6, majf=0, minf=65 00:41:54.277 IO depths : 1=0.1%, 2=0.1%, 4=70.6%, 8=29.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:54.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.277 complete : 0=0.0%, 4=93.7%, 8=6.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.277 issued rwts: total=14557,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.277 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:54.277 filename1: (groupid=0, jobs=1): err= 0: pid=1676389: Thu Dec 5 12:25:18 2024 00:41:54.277 read: IOPS=3030, BW=23.7MiB/s (24.8MB/s)(118MiB/5001msec) 00:41:54.277 slat (nsec): min=5547, max=56485, avg=5968.78, stdev=1186.68 00:41:54.277 clat (usec): min=1371, max=4951, avg=2625.09, stdev=345.68 00:41:54.277 lat (usec): min=1377, max=4957, avg=2631.06, stdev=345.71 00:41:54.277 clat percentiles (usec): 00:41:54.277 | 1.00th=[ 1860], 5.00th=[ 2057], 10.00th=[ 2212], 20.00th=[ 2376], 00:41:54.277 | 30.00th=[ 2474], 40.00th=[ 2671], 50.00th=[ 2704], 60.00th=[ 2704], 00:41:54.277 | 70.00th=[ 2704], 80.00th=[ 2737], 90.00th=[ 2966], 95.00th=[ 3392], 00:41:54.277 | 99.00th=[ 3654], 99.50th=[ 3687], 99.90th=[ 3916], 99.95th=[ 4047], 00:41:54.277 | 99.99th=[ 4948] 00:41:54.277 bw ( KiB/s): min=23824, max=24656, per=25.79%, avg=24229.33, stdev=249.16, samples=9 
00:41:54.277 iops : min= 2978, max= 3082, avg=3028.67, stdev=31.14, samples=9 00:41:54.277 lat (msec) : 2=2.27%, 4=97.66%, 10=0.07% 00:41:54.277 cpu : usr=97.10%, sys=2.68%, ctx=5, majf=0, minf=116 00:41:54.277 IO depths : 1=0.1%, 2=0.4%, 4=68.3%, 8=31.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:54.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.277 complete : 0=0.0%, 4=95.6%, 8=4.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:54.277 issued rwts: total=15155,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:54.277 latency : target=0, window=0, percentile=100.00%, depth=8 00:41:54.277 00:41:54.277 Run status group 0 (all jobs): 00:41:54.277 READ: bw=91.7MiB/s (96.2MB/s), 22.6MiB/s-23.7MiB/s (23.7MB/s-24.8MB/s), io=459MiB (481MB), run=5001-5003msec 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.277 12:25:18 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:54.277 12:25:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.277 12:25:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:54.277 12:25:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.277 00:41:54.277 real 0m24.481s 00:41:54.277 user 5m16.488s 00:41:54.277 sys 0m4.752s 00:41:54.277 12:25:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:54.277 12:25:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:54.277 ************************************ 00:41:54.277 END TEST fio_dif_rand_params 00:41:54.277 ************************************ 00:41:54.277 12:25:19 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:41:54.277 12:25:19 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:54.277 12:25:19 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:54.277 12:25:19 nvmf_dif -- common/autotest_common.sh@10 -- # 
set +x 00:41:54.277 ************************************ 00:41:54.277 START TEST fio_dif_digest 00:41:54.277 ************************************ 00:41:54.277 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:41:54.277 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:41:54.277 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:41:54.277 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:41:54.277 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:41:54.277 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:54.278 bdev_null0 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:41:54.278 [2024-12-05 12:25:19.140654] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # config=() 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:54.278 12:25:19 
nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # local subsystem config 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:41:54.278 { 00:41:54.278 "params": { 00:41:54.278 "name": "Nvme$subsystem", 00:41:54.278 "trtype": "$TEST_TRANSPORT", 00:41:54.278 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:54.278 "adrfam": "ipv4", 00:41:54.278 "trsvcid": "$NVMF_PORT", 00:41:54.278 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:54.278 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:54.278 "hdgst": ${hdgst:-false}, 00:41:54.278 "ddgst": ${ddgst:-false} 00:41:54.278 }, 00:41:54.278 "method": "bdev_nvme_attach_controller" 00:41:54.278 } 00:41:54.278 EOF 00:41:54.278 )") 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 
00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # cat 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@396 -- # jq . 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@397 -- # IFS=, 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:41:54.278 "params": { 00:41:54.278 "name": "Nvme0", 00:41:54.278 "trtype": "tcp", 00:41:54.278 "traddr": "10.0.0.2", 00:41:54.278 "adrfam": "ipv4", 00:41:54.278 "trsvcid": "4420", 00:41:54.278 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:54.278 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:54.278 "hdgst": true, 00:41:54.278 "ddgst": true 00:41:54.278 }, 00:41:54.278 "method": "bdev_nvme_attach_controller" 00:41:54.278 }' 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 
00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:54.278 12:25:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:54.537 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:54.537 ... 00:41:54.537 fio-3.35 00:41:54.537 Starting 3 threads 00:42:06.740 00:42:06.740 filename0: (groupid=0, jobs=1): err= 0: pid=1677661: Thu Dec 5 12:25:30 2024 00:42:06.740 read: IOPS=309, BW=38.7MiB/s (40.6MB/s)(389MiB/10046msec) 00:42:06.740 slat (nsec): min=6013, max=47767, avg=9939.54, stdev=1781.22 00:42:06.740 clat (usec): min=6632, max=50910, avg=9661.47, stdev=1259.87 00:42:06.740 lat (usec): min=6642, max=50918, avg=9671.41, stdev=1259.80 00:42:06.740 clat percentiles (usec): 00:42:06.740 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 8979], 00:42:06.740 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:42:06.740 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10552], 95.00th=[10945], 00:42:06.740 | 99.00th=[11469], 99.50th=[11731], 99.90th=[12256], 99.95th=[46924], 00:42:06.740 | 99.99th=[51119] 00:42:06.740 bw ( KiB/s): min=38656, max=40960, per=35.17%, avg=39795.20, stdev=572.28, samples=20 00:42:06.740 iops : min= 302, max= 320, avg=310.90, stdev= 4.47, samples=20 00:42:06.740 lat (msec) : 10=68.53%, 20=31.40%, 50=0.03%, 100=0.03% 00:42:06.740 cpu : usr=95.13%, sys=4.58%, ctx=36, majf=0, minf=173 00:42:06.740 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 
16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.740 issued rwts: total=3111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.740 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:06.740 filename0: (groupid=0, jobs=1): err= 0: pid=1677662: Thu Dec 5 12:25:30 2024 00:42:06.740 read: IOPS=285, BW=35.7MiB/s (37.4MB/s)(357MiB/10005msec) 00:42:06.740 slat (nsec): min=5882, max=37949, avg=7960.80, stdev=1765.74 00:42:06.740 clat (usec): min=4919, max=19581, avg=10499.17, stdev=950.68 00:42:06.740 lat (usec): min=4925, max=19619, avg=10507.13, stdev=950.86 00:42:06.740 clat percentiles (usec): 00:42:06.740 | 1.00th=[ 8029], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:42:06.740 | 30.00th=[10028], 40.00th=[10290], 50.00th=[10552], 60.00th=[10683], 00:42:06.740 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11600], 95.00th=[11994], 00:42:06.740 | 99.00th=[12649], 99.50th=[13173], 99.90th=[17957], 99.95th=[17957], 00:42:06.740 | 99.99th=[19530] 00:42:06.740 bw ( KiB/s): min=35072, max=37376, per=32.30%, avg=36554.11, stdev=624.62, samples=19 00:42:06.740 iops : min= 274, max= 292, avg=285.58, stdev= 4.88, samples=19 00:42:06.740 lat (msec) : 10=26.61%, 20=73.39% 00:42:06.740 cpu : usr=95.41%, sys=4.32%, ctx=22, majf=0, minf=148 00:42:06.740 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.740 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.740 issued rwts: total=2856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.740 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:06.740 filename0: (groupid=0, jobs=1): err= 0: pid=1677663: Thu Dec 5 12:25:30 2024 00:42:06.740 read: IOPS=290, BW=36.3MiB/s (38.0MB/s)(364MiB/10045msec) 00:42:06.740 slat 
(nsec): min=5947, max=38624, avg=7894.45, stdev=1502.14 00:42:06.741 clat (usec): min=7282, max=54135, avg=10316.86, stdev=1902.12 00:42:06.741 lat (usec): min=7289, max=54142, avg=10324.76, stdev=1902.08 00:42:06.741 clat percentiles (usec): 00:42:06.741 | 1.00th=[ 8356], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9634], 00:42:06.741 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:42:06.741 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11731], 00:42:06.741 | 99.00th=[12518], 99.50th=[12780], 99.90th=[52167], 99.95th=[53216], 00:42:06.741 | 99.99th=[54264] 00:42:06.741 bw ( KiB/s): min=34048, max=40192, per=32.94%, avg=37273.60, stdev=1051.91, samples=20 00:42:06.741 iops : min= 266, max= 314, avg=291.20, stdev= 8.22, samples=20 00:42:06.741 lat (msec) : 10=38.78%, 20=61.05%, 50=0.07%, 100=0.10% 00:42:06.741 cpu : usr=94.59%, sys=5.14%, ctx=26, majf=0, minf=139 00:42:06.741 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:06.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:06.741 issued rwts: total=2914,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:06.741 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:06.741 00:42:06.741 Run status group 0 (all jobs): 00:42:06.741 READ: bw=111MiB/s (116MB/s), 35.7MiB/s-38.7MiB/s (37.4MB/s-40.6MB/s), io=1110MiB (1164MB), run=10005-10046msec 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- 
target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:06.741 00:42:06.741 real 0m11.290s 00:42:06.741 user 0m42.524s 00:42:06.741 sys 0m1.744s 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:06.741 12:25:30 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:06.741 ************************************ 00:42:06.741 END TEST fio_dif_digest 00:42:06.741 ************************************ 00:42:06.741 12:25:30 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:06.741 12:25:30 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:06.741 12:25:30 nvmf_dif -- nvmf/common.sh@335 -- # nvmfcleanup 00:42:06.741 12:25:30 nvmf_dif -- nvmf/common.sh@99 -- # sync 00:42:06.741 12:25:30 nvmf_dif -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:42:06.741 12:25:30 nvmf_dif -- nvmf/common.sh@102 -- # set +e 00:42:06.741 12:25:30 nvmf_dif -- nvmf/common.sh@103 -- # for i in {1..20} 00:42:06.741 12:25:30 nvmf_dif -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:42:06.741 rmmod nvme_tcp 00:42:06.741 rmmod nvme_fabrics 00:42:06.741 rmmod nvme_keyring 00:42:06.741 12:25:30 nvmf_dif -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:42:06.741 12:25:30 nvmf_dif -- nvmf/common.sh@106 -- 
# set -e 00:42:06.741 12:25:30 nvmf_dif -- nvmf/common.sh@107 -- # return 0 00:42:06.741 12:25:30 nvmf_dif -- nvmf/common.sh@336 -- # '[' -n 1667369 ']' 00:42:06.741 12:25:30 nvmf_dif -- nvmf/common.sh@337 -- # killprocess 1667369 00:42:06.741 12:25:30 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1667369 ']' 00:42:06.741 12:25:30 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1667369 00:42:06.741 12:25:30 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:42:06.741 12:25:30 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:06.741 12:25:30 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1667369 00:42:06.741 12:25:30 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:06.741 12:25:30 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:06.741 12:25:30 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1667369' 00:42:06.741 killing process with pid 1667369 00:42:06.741 12:25:30 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1667369 00:42:06.741 12:25:30 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1667369 00:42:06.741 12:25:30 nvmf_dif -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:42:06.741 12:25:30 nvmf_dif -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:09.285 Waiting for block devices as requested 00:42:09.285 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:09.285 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:09.285 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:09.285 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:09.544 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:09.545 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:09.545 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:09.806 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:09.806 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:09.806 0000:00:01.6 (8086 
0b00): vfio-pci -> ioatdma 00:42:10.065 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:10.065 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:10.065 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:10.324 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:10.324 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:10.324 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:10.583 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:10.583 12:25:35 nvmf_dif -- nvmf/common.sh@342 -- # nvmf_fini 00:42:10.583 12:25:35 nvmf_dif -- nvmf/setup.sh@254 -- # local dev 00:42:10.583 12:25:35 nvmf_dif -- nvmf/setup.sh@257 -- # remove_target_ns 00:42:10.583 12:25:35 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:42:10.583 12:25:35 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:42:10.583 12:25:35 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@258 -- # delete_main_bridge 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@121 -- # return 0 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e 
/sys/class/net/cvl_0_1/address ]] 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@41 -- # _dev=0 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@41 -- # dev_map=() 00:42:12.522 12:25:37 nvmf_dif -- nvmf/setup.sh@274 -- # iptr 00:42:12.522 12:25:37 nvmf_dif -- nvmf/common.sh@548 -- # iptables-save 00:42:12.522 12:25:37 nvmf_dif -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:42:12.522 12:25:37 nvmf_dif -- nvmf/common.sh@548 -- # iptables-restore 00:42:12.522 00:42:12.522 real 1m18.205s 00:42:12.522 user 8m0.959s 00:42:12.522 sys 0m21.988s 00:42:12.522 12:25:37 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:12.522 12:25:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:12.522 ************************************ 00:42:12.522 END TEST nvmf_dif 00:42:12.522 ************************************ 00:42:12.522 12:25:37 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:12.522 12:25:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:12.522 12:25:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:12.523 12:25:37 -- common/autotest_common.sh@10 -- # set +x 00:42:12.783 ************************************ 00:42:12.783 START TEST nvmf_abort_qd_sizes 00:42:12.783 ************************************ 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:12.783 * Looking for test storage... 00:42:12.783 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:12.783 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:12.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.784 --rc genhtml_branch_coverage=1 00:42:12.784 --rc genhtml_function_coverage=1 00:42:12.784 --rc genhtml_legend=1 00:42:12.784 --rc geninfo_all_blocks=1 00:42:12.784 --rc geninfo_unexecuted_blocks=1 00:42:12.784 00:42:12.784 ' 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:12.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.784 --rc genhtml_branch_coverage=1 00:42:12.784 --rc genhtml_function_coverage=1 00:42:12.784 --rc genhtml_legend=1 00:42:12.784 --rc 
geninfo_all_blocks=1 00:42:12.784 --rc geninfo_unexecuted_blocks=1 00:42:12.784 00:42:12.784 ' 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:12.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.784 --rc genhtml_branch_coverage=1 00:42:12.784 --rc genhtml_function_coverage=1 00:42:12.784 --rc genhtml_legend=1 00:42:12.784 --rc geninfo_all_blocks=1 00:42:12.784 --rc geninfo_unexecuted_blocks=1 00:42:12.784 00:42:12.784 ' 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:12.784 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.784 --rc genhtml_branch_coverage=1 00:42:12.784 --rc genhtml_function_coverage=1 00:42:12.784 --rc genhtml_legend=1 00:42:12.784 --rc geninfo_all_blocks=1 00:42:12.784 --rc geninfo_unexecuted_blocks=1 00:42:12.784 00:42:12.784 ' 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@50 -- # : 0 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # export 
NVMF_APP_SHM_ID 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:42:12.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:42:12.784 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@54 -- # have_pci_nics=0 00:42:13.046 12:25:37 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:13.046 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:42:13.046 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:13.046 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # prepare_net_devs 00:42:13.046 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # local -g is_hw=no 00:42:13.046 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # remove_target_ns 00:42:13.046 12:25:37 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:42:13.046 12:25:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:42:13.046 12:25:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:42:13.046 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:42:13.046 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:42:13.046 12:25:37 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # xtrace_disable 
00:42:13.046 12:25:37 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # pci_devs=() 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # local -a pci_devs 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # pci_net_devs=() 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # pci_drivers=() 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # local -A pci_drivers 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # net_devs=() 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # local -ga net_devs 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # e810=() 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # local -ga e810 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # x722=() 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # local -ga x722 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # mlx=() 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # local -ga mlx 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:42:21.187 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:42:21.187 12:25:44 
nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:42:21.187 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:42:21.187 Found net devices under 0000:4b:00.0: cvl_0_0 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci 
in "${pci_devs[@]}" 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:21.187 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:42:21.188 Found net devices under 0000:4b:00.1: cvl_0_1 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # is_hw=yes 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@247 -- # create_target_ns 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@27 -- # local -gA dev_map 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@28 -- # local -g _dev 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 
00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772161 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:42:21.188 10.0.0.1 00:42:21.188 
12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772162 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:42:21.188 10.0.0.2 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:42:21.188 12:25:44 
nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:42:21.188 12:25:44 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@38 -- # ping_ips 1 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- 
# get_ip_address initiator0 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:42:21.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:21.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.573 ms 00:42:21.188 00:42:21.188 --- 10.0.0.1 ping statistics --- 00:42:21.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:21.188 rtt min/avg/max/mdev = 0.573/0.573/0.573/0.000 ms 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:21.188 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= 
count=1 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:42:21.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:21.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:42:21.189 00:42:21.189 --- 10.0.0.2 ping statistics --- 00:42:21.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:21.189 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair++ )) 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # return 0 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:42:21.189 12:25:45 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:23.735 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:42:23.735 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:23.735 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # get_initiator_ip_address 
initiator1 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator1 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # return 1 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev= 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@160 -- # return 0 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # get_tcp_target_ip_address 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo 
cvl_0_1 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # get_tcp_target_ip_address target1 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:42:23.735 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target1 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target1 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # return 1 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev= 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@160 -- # return 0 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # NVMF_SECOND_TARGET_IP= 
00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@336 -- # [[ tcp == rdma ]] 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/setup.sh@341 -- # RDMA_IP_LIST='10.0.0.2 00:42:23.736 ' 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # nvmfpid=1687108 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # waitforlisten 1687108 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1687108 ']' 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:23.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:23.736 12:25:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:23.736 [2024-12-05 12:25:48.770962] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:42:23.736 [2024-12-05 12:25:48.771020] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:23.996 [2024-12-05 12:25:48.868535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:23.996 [2024-12-05 12:25:48.922787] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:23.996 [2024-12-05 12:25:48.922842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:23.996 [2024-12-05 12:25:48.922852] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:23.996 [2024-12-05 12:25:48.922859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:23.996 [2024-12-05 12:25:48.922866] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:23.996 [2024-12-05 12:25:48.924917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:23.996 [2024-12-05 12:25:48.925080] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:23.996 [2024-12-05 12:25:48.925241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:23.996 [2024-12-05 12:25:48.925241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:24.567 12:25:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:24.567 12:25:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:42:24.567 12:25:49 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:42:24.567 12:25:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:24.567 12:25:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:65:00.0 ]] 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 
00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:65:00.0 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:24.827 12:25:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:24.827 ************************************ 00:42:24.827 START TEST spdk_target_abort 00:42:24.827 ************************************ 00:42:24.827 12:25:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:42:24.827 12:25:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:24.827 12:25:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:42:24.828 12:25:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:24.828 12:25:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:25.087 spdk_targetn1 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:25.087 [2024-12-05 12:25:50.009388] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:25.087 [2024-12-05 12:25:50.057700] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:25.087 12:25:50 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:25.346 [2024-12-05 12:25:50.217965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:504 len:8 PRP1 0x200004ac2000 PRP2 0x0 00:42:25.346 [2024-12-05 12:25:50.217999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0040 p:1 m:0 dnr:0 00:42:25.346 [2024-12-05 12:25:50.257920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:1784 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:42:25.346 [2024-12-05 12:25:50.257944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:00e0 p:1 m:0 dnr:0 00:42:25.346 [2024-12-05 12:25:50.273952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:189 nsid:1 lba:2312 len:8 PRP1 0x200004ac0000 PRP2 0x0 00:42:25.346 [2024-12-05 
12:25:50.273973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:189 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:42:25.346 [2024-12-05 12:25:50.297976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3104 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:42:25.346 [2024-12-05 12:25:50.297996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0086 p:0 m:0 dnr:0 00:42:25.346 [2024-12-05 12:25:50.305015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:3328 len:8 PRP1 0x200004ac8000 PRP2 0x0 00:42:25.346 [2024-12-05 12:25:50.305034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:00a2 p:0 m:0 dnr:0 00:42:25.346 [2024-12-05 12:25:50.306157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:3392 len:8 PRP1 0x200004ac4000 PRP2 0x0 00:42:25.346 [2024-12-05 12:25:50.306173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:00a9 p:0 m:0 dnr:0 00:42:28.640 Initializing NVMe Controllers 00:42:28.640 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:28.640 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:28.640 Initialization complete. Launching workers. 
00:42:28.640 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11957, failed: 6 00:42:28.640 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2707, failed to submit 9256 00:42:28.640 success 754, unsuccessful 1953, failed 0 00:42:28.640 12:25:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:28.640 12:25:53 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:28.640 [2024-12-05 12:25:53.505533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:688 len:8 PRP1 0x200004e54000 PRP2 0x0 00:42:28.640 [2024-12-05 12:25:53.505569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:005b p:1 m:0 dnr:0 00:42:28.640 [2024-12-05 12:25:53.556598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:2080 len:8 PRP1 0x200004e4c000 PRP2 0x0 00:42:28.640 [2024-12-05 12:25:53.556625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:42:28.640 [2024-12-05 12:25:53.580618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:181 nsid:1 lba:2616 len:8 PRP1 0x200004e50000 PRP2 0x0 00:42:28.641 [2024-12-05 12:25:53.580641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:181 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:42:28.641 [2024-12-05 12:25:53.612574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:188 nsid:1 lba:3448 len:8 PRP1 0x200004e40000 PRP2 0x0 00:42:28.641 [2024-12-05 12:25:53.612597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY 
REQUEST (00/07) qid:4 cid:188 cdw0:0 sqhd:00b3 p:0 m:0 dnr:0 00:42:31.929 Initializing NVMe Controllers 00:42:31.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:31.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:31.929 Initialization complete. Launching workers. 00:42:31.929 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8606, failed: 4 00:42:31.929 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1236, failed to submit 7374 00:42:31.929 success 316, unsuccessful 920, failed 0 00:42:31.929 12:25:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:31.929 12:25:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:35.222 Initializing NVMe Controllers 00:42:35.222 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:35.222 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:35.222 Initialization complete. Launching workers. 
00:42:35.222 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 45064, failed: 0 00:42:35.222 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2431, failed to submit 42633 00:42:35.222 success 592, unsuccessful 1839, failed 0 00:42:35.222 12:25:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:35.222 12:25:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.222 12:25:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:35.222 12:25:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:35.222 12:25:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:35.222 12:25:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:35.222 12:25:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1687108 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1687108 ']' 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1687108 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1687108 00:42:37.148 12:26:01 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1687108' 00:42:37.148 killing process with pid 1687108 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1687108 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1687108 00:42:37.148 00:42:37.148 real 0m12.194s 00:42:37.148 user 0m49.660s 00:42:37.148 sys 0m2.040s 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:37.148 ************************************ 00:42:37.148 END TEST spdk_target_abort 00:42:37.148 ************************************ 00:42:37.148 12:26:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:37.148 12:26:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:37.148 12:26:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:37.148 12:26:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:37.148 ************************************ 00:42:37.148 START TEST kernel_target_abort 00:42:37.148 ************************************ 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:42:37.148 12:26:01 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:42:37.148 12:26:01 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@441 -- # local block nvme 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:42:37.148 12:26:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@444 -- # modprobe nvmet 00:42:37.148 12:26:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:37.148 12:26:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:40.443 Waiting for block devices as requested 00:42:40.443 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:40.443 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:40.443 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:40.701 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:40.701 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:40.701 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:42:40.959 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:40.959 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:40.959 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:42:41.220 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:42:41.220 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:42:41.220 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:42:41.480 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:42:41.480 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:42:41.480 0000:00:01.3 (8086 0b00): 
vfio-pci -> ioatdma 00:42:41.739 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:42:41.739 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:41.739 No valid GPT data, bailing 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@460 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@469 -- # echo 1 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@471 -- # echo 1 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@474 -- # echo tcp 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@475 -- # echo 4420 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@476 -- # echo ipv4 00:42:41.739 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:42:42.079 00:42:42.079 Discovery Log Number of Records 2, Generation counter 2 00:42:42.079 =====Discovery Log Entry 0====== 00:42:42.079 trtype: tcp 00:42:42.079 adrfam: ipv4 00:42:42.079 subtype: current discovery subsystem 00:42:42.079 treq: not specified, sq flow control disable supported 00:42:42.079 portid: 1 00:42:42.079 trsvcid: 4420 
00:42:42.079 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:42.079 traddr: 10.0.0.1 00:42:42.079 eflags: none 00:42:42.079 sectype: none 00:42:42.079 =====Discovery Log Entry 1====== 00:42:42.079 trtype: tcp 00:42:42.079 adrfam: ipv4 00:42:42.079 subtype: nvme subsystem 00:42:42.079 treq: not specified, sq flow control disable supported 00:42:42.079 portid: 1 00:42:42.079 trsvcid: 4420 00:42:42.079 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:42.079 traddr: 10.0.0.1 00:42:42.079 eflags: none 00:42:42.079 sectype: none 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype 
adrfam traddr trsvcid subnqn 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:42.079 12:26:06 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:45.441 Initializing NVMe Controllers 00:42:45.441 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:45.441 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:45.441 Initialization complete. Launching workers. 
00:42:45.441 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67648, failed: 0 00:42:45.441 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 67648, failed to submit 0 00:42:45.441 success 0, unsuccessful 67648, failed 0 00:42:45.441 12:26:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:45.441 12:26:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:47.983 Initializing NVMe Controllers 00:42:47.983 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:47.983 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:47.983 Initialization complete. Launching workers. 00:42:47.983 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 116458, failed: 0 00:42:47.983 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 29318, failed to submit 87140 00:42:47.983 success 0, unsuccessful 29318, failed 0 00:42:47.983 12:26:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:47.983 12:26:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:51.274 Initializing NVMe Controllers 00:42:51.274 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:42:51.274 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:51.274 Initialization complete. Launching workers. 
00:42:51.274 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146382, failed: 0 00:42:51.274 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36614, failed to submit 109768 00:42:51.274 success 0, unsuccessful 36614, failed 0 00:42:51.274 12:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:42:51.274 12:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:42:51.274 12:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@488 -- # echo 0 00:42:51.274 12:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:51.274 12:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:51.274 12:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:42:51.274 12:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:51.274 12:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:42:51.274 12:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:42:51.274 12:26:16 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:54.572 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:54.572 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:54.572 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:54.572 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:54.572 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:54.832 
0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:54.832 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:54.832 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:54.832 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:42:54.832 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:42:54.832 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:42:54.832 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:42:54.832 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:42:54.832 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:42:54.832 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:42:54.832 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:42:56.741 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:42:56.741 00:42:56.741 real 0m19.587s 00:42:56.741 user 0m9.752s 00:42:56.741 sys 0m5.605s 00:42:56.741 12:26:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:56.741 12:26:21 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:56.741 ************************************ 00:42:56.741 END TEST kernel_target_abort 00:42:56.741 ************************************ 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # nvmfcleanup 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@99 -- # sync 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@102 -- # set +e 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@103 -- # for i in {1..20} 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:42:56.741 rmmod nvme_tcp 00:42:56.741 rmmod nvme_fabrics 00:42:56.741 rmmod nvme_keyring 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@105 
-- # modprobe -v -r nvme-fabrics 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@106 -- # set -e 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@107 -- # return 0 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # '[' -n 1687108 ']' 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@337 -- # killprocess 1687108 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1687108 ']' 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1687108 00:42:56.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1687108) - No such process 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1687108 is not found' 00:42:56.741 Process with pid 1687108 is not found 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:42:56.741 12:26:21 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:00.034 Waiting for block devices as requested 00:43:00.034 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:43:00.034 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:43:00.293 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:43:00.293 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:43:00.293 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:43:00.293 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:43:00.553 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:43:00.553 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:43:00.553 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:43:00.812 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:43:00.812 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:43:00.812 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:43:01.072 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:43:01.072 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:43:01.072 
0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:43:01.332 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:43:01.332 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:43:01.332 12:26:26 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # nvmf_fini 00:43:01.332 12:26:26 nvmf_abort_qd_sizes -- nvmf/setup.sh@254 -- # local dev 00:43:01.332 12:26:26 nvmf_abort_qd_sizes -- nvmf/setup.sh@257 -- # remove_target_ns 00:43:01.332 12:26:26 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:43:01.332 12:26:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:43:01.332 12:26:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@258 -- # delete_main_bridge 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # return 0 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # _dev=0 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # dev_map=() 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/setup.sh@274 -- # iptr 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-save 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-restore 00:43:03.873 00:43:03.873 real 0m50.785s 00:43:03.873 user 1m4.599s 00:43:03.873 sys 0m18.204s 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:03.873 12:26:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:03.873 ************************************ 00:43:03.873 END TEST nvmf_abort_qd_sizes 00:43:03.873 ************************************ 00:43:03.873 12:26:28 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:03.873 12:26:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:03.873 12:26:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:03.873 12:26:28 -- common/autotest_common.sh@10 -- # set +x 00:43:03.873 ************************************ 00:43:03.873 START TEST keyring_file 00:43:03.873 ************************************ 00:43:03.873 
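The kernel_target_abort trace above (nvmf/common.sh@460-479 for setup, @486-497 for teardown) drives the in-kernel NVMe-oF target purely through configfs: create the subsystem and namespace directories, write the attributes, create the port, then symlink the subsystem into the port. The sketch below mirrors that sequence under stated assumptions: the root directory is parameterized so the layout can be exercised against a scratch directory, and the attribute file names approximate the kernel nvmet configfs layout rather than quoting common.sh exactly. Against a real kernel the root is /sys/kernel/config/nvmet and the commands need root plus the nvmet/nvmet_tcp modules loaded.

```shell
#!/usr/bin/env bash
# Sketch of the configfs sequence traced above. NVMET-style root is passed
# in so the directory/file layout can be tried out without root; attribute
# names (attr_model, attr_allow_any_host, device_path, enable, addr_*) are
# an approximation of the kernel nvmet configfs interface.
set -euo pipefail

setup_kernel_target() {
    local root=$1 nqn=$2 dev=$3 addr=$4 svcid=$5
    local subsys=$root/subsystems/$nqn
    local port=$root/ports/1

    # mkdir of the subsystem/namespace/port nodes (configfs creates the
    # attribute files itself; on a scratch dir the echo redirects do).
    mkdir -p "$subsys/namespaces/1" "$port/subsystems"
    echo "SPDK-$nqn" > "$subsys/attr_model"
    echo 1           > "$subsys/attr_allow_any_host"
    echo "$dev"      > "$subsys/namespaces/1/device_path"
    echo 1           > "$subsys/namespaces/1/enable"
    echo "$addr"     > "$port/addr_traddr"
    echo tcp         > "$port/addr_trtype"
    echo "$svcid"    > "$port/addr_trsvcid"
    echo ipv4        > "$port/addr_adrfam"

    # Exposing the subsystem on the port is a symlink, as in the trace.
    ln -s "$subsys" "$port/subsystems/"
}
```

Teardown in the trace is the mirror image: `rm -f` the port symlink, then `rmdir` the namespace, port, and subsystem directories (configfs removes the kernel-managed attribute files with them), then `modprobe -r nvmet_tcp nvmet`.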
12:26:28 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:03.873 * Looking for test storage... 00:43:03.873 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:03.873 12:26:28 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:03.873 12:26:28 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:43:03.873 12:26:28 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:03.873 12:26:28 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@345 -- # : 1 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@353 -- # local d=1 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@355 -- # echo 1 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@353 -- # local d=2 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@355 -- # echo 2 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:03.873 12:26:28 keyring_file -- scripts/common.sh@368 -- # return 0 00:43:03.873 12:26:28 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:03.873 12:26:28 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:03.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.873 --rc genhtml_branch_coverage=1 00:43:03.873 --rc genhtml_function_coverage=1 00:43:03.873 --rc genhtml_legend=1 00:43:03.873 --rc geninfo_all_blocks=1 00:43:03.873 --rc geninfo_unexecuted_blocks=1 00:43:03.873 00:43:03.873 ' 00:43:03.873 12:26:28 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:03.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.873 --rc genhtml_branch_coverage=1 00:43:03.873 --rc genhtml_function_coverage=1 00:43:03.873 --rc genhtml_legend=1 00:43:03.873 --rc geninfo_all_blocks=1 00:43:03.873 --rc geninfo_unexecuted_blocks=1 00:43:03.873 00:43:03.873 ' 00:43:03.873 
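The scripts/common.sh trace above (cmp_versions / lt 1.15 2) decides whether the installed lcov predates 1.15 by splitting both version strings on `.-:` and comparing component by component. A minimal standalone sketch of that comparison (`version_lt` is a hypothetical name, not the helper's; it assumes purely numeric components):

```shell
# Component-wise "is version A < version B", in the style of the
# cmp_versions trace above: split on ".-:", pad the shorter version
# with zeros, compare numerically left to right.
version_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < max; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        if (( x < y )); then return 0; fi
        if (( x > y )); then return 1; fi
    done
    return 1  # equal is not less-than
}
```

So `version_lt 1.15 2` succeeds (1 < 2 on the first component), which is why the trace takes the `lt 1.15 2` branch and applies the pre-1.15 LCOV_OPTS.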
12:26:28 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:03.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.873 --rc genhtml_branch_coverage=1 00:43:03.873 --rc genhtml_function_coverage=1 00:43:03.873 --rc genhtml_legend=1 00:43:03.873 --rc geninfo_all_blocks=1 00:43:03.873 --rc geninfo_unexecuted_blocks=1 00:43:03.873 00:43:03.873 ' 00:43:03.873 12:26:28 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:03.873 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:03.873 --rc genhtml_branch_coverage=1 00:43:03.873 --rc genhtml_function_coverage=1 00:43:03.873 --rc genhtml_legend=1 00:43:03.873 --rc geninfo_all_blocks=1 00:43:03.873 --rc geninfo_unexecuted_blocks=1 00:43:03.873 00:43:03.873 ' 00:43:03.873 12:26:28 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:03.873 12:26:28 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:03.873 12:26:28 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:43:03.873 12:26:28 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:03.873 12:26:28 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:03.873 12:26:28 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:03.873 12:26:28 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:03.873 12:26:28 keyring_file -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@16 
-- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:03.874 12:26:28 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:43:03.874 12:26:28 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:03.874 12:26:28 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:03.874 12:26:28 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:03.874 12:26:28 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.874 12:26:28 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.874 12:26:28 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.874 12:26:28 keyring_file -- paths/export.sh@5 -- # export PATH 00:43:03.874 12:26:28 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:43:03.874 12:26:28 keyring_file -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:43:03.874 12:26:28 keyring_file -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:43:03.874 12:26:28 keyring_file -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@50 -- # : 0 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:43:03.874 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@54 -- # have_pci_nics=0 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:03.874 12:26:28 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:03.874 12:26:28 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:03.874 12:26:28 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:43:03.874 12:26:28 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:43:03.874 12:26:28 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:43:03.874 12:26:28 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.z2AqLXhnKj 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@506 -- # 
key=00112233445566778899aabbccddeeff 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@507 -- # python - 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.z2AqLXhnKj 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.z2AqLXhnKj 00:43:03.874 12:26:28 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.z2AqLXhnKj 00:43:03.874 12:26:28 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@17 -- # name=key1 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.oGKLDVKmbe 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:43:03.874 12:26:28 keyring_file -- nvmf/common.sh@507 -- # python - 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.oGKLDVKmbe 00:43:03.874 12:26:28 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.oGKLDVKmbe 00:43:03.874 12:26:28 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.oGKLDVKmbe 
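The keyring/common.sh prep_key trace above creates each key file with mktemp, writes the key material, and restricts it to mode 0600 before handing the path to the test. A sketch of just that file handling (`prep_key_file` is a hypothetical name; the NVMeTLSkey-1 interchange encoding itself is produced by an inline python helper in nvmf/common.sh and is not reproduced here):

```shell
# Write key material to a private temp file, as keyring/common.sh does:
# mktemp, write, chmod 0600, print the resulting path.
prep_key_file() {
    local key=$1 path
    path=$(mktemp /tmp/tmp.XXXXXXXXXX)
    printf '%s\n' "$key" > "$path"
    chmod 0600 "$path"
    echo "$path"
}
```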
00:43:03.874 12:26:28 keyring_file -- keyring/file.sh@30 -- # tgtpid=1697020 00:43:03.874 12:26:28 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1697020 00:43:03.874 12:26:28 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:03.874 12:26:28 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1697020 ']' 00:43:03.874 12:26:28 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:03.874 12:26:28 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:03.874 12:26:28 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:03.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:03.874 12:26:28 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:03.874 12:26:28 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:03.874 [2024-12-05 12:26:28.872917] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:43:03.874 [2024-12-05 12:26:28.872991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697020 ] 00:43:04.135 [2024-12-05 12:26:28.965797] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:04.135 [2024-12-05 12:26:29.019620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:04.707 12:26:29 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:04.707 [2024-12-05 12:26:29.681783] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:04.707 null0 00:43:04.707 [2024-12-05 12:26:29.713834] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:04.707 [2024-12-05 12:26:29.714124] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:04.707 12:26:29 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:04.707 [2024-12-05 12:26:29.745902] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:43:04.707 request: 00:43:04.707 { 00:43:04.707 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:43:04.707 "secure_channel": false, 00:43:04.707 "listen_address": { 00:43:04.707 "trtype": "tcp", 00:43:04.707 "traddr": "127.0.0.1", 00:43:04.707 "trsvcid": "4420" 00:43:04.707 }, 00:43:04.707 "method": "nvmf_subsystem_add_listener", 00:43:04.707 "req_id": 1 00:43:04.707 } 00:43:04.707 Got JSON-RPC error response 00:43:04.707 response: 00:43:04.707 { 00:43:04.707 "code": -32602, 00:43:04.707 "message": "Invalid parameters" 00:43:04.707 } 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:04.707 12:26:29 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:04.707 12:26:29 keyring_file -- keyring/file.sh@47 -- # bperfpid=1697333 00:43:04.968 12:26:29 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1697333 /var/tmp/bperf.sock 00:43:04.968 12:26:29 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:43:04.968 12:26:29 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1697333 ']' 00:43:04.968 12:26:29 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:04.968 12:26:29 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:04.968 12:26:29 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:04.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:04.968 12:26:29 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:04.968 12:26:29 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:04.968 [2024-12-05 12:26:29.803581] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 00:43:04.968 [2024-12-05 12:26:29.803630] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1697333 ] 00:43:04.968 [2024-12-05 12:26:29.891065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:04.968 [2024-12-05 12:26:29.927691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:05.540 12:26:30 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:05.540 12:26:30 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:05.540 12:26:30 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.z2AqLXhnKj 00:43:05.540 12:26:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.z2AqLXhnKj 00:43:05.801 12:26:30 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.oGKLDVKmbe 00:43:05.801 12:26:30 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.oGKLDVKmbe 00:43:06.061 12:26:30 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:43:06.061 12:26:30 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:43:06.061 12:26:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:06.061 12:26:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:06.061 12:26:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:06.322 12:26:31 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.z2AqLXhnKj == \/\t\m\p\/\t\m\p\.\z\2\A\q\L\X\h\n\K\j ]] 00:43:06.322 12:26:31 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:43:06.322 12:26:31 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:43:06.322 12:26:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:06.322 12:26:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:06.322 12:26:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:06.322 12:26:31 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.oGKLDVKmbe == \/\t\m\p\/\t\m\p\.\o\G\K\L\D\V\K\m\b\e ]] 00:43:06.322 12:26:31 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:43:06.322 12:26:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:06.322 12:26:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:06.322 12:26:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:06.322 12:26:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:06.322 12:26:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:43:06.582 12:26:31 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:43:06.582 12:26:31 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:43:06.582 12:26:31 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:06.582 12:26:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:06.582 12:26:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:06.582 12:26:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:06.582 12:26:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:06.843 12:26:31 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:43:06.843 12:26:31 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:06.843 12:26:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:07.102 [2024-12-05 12:26:31.924252] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:07.102 nvme0n1 00:43:07.102 12:26:32 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:43:07.102 12:26:32 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:07.102 12:26:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:07.102 12:26:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:07.103 12:26:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:07.103 12:26:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:43:07.363 12:26:32 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:43:07.363 12:26:32 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:43:07.363 12:26:32 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:07.363 12:26:32 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:07.363 12:26:32 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:07.363 12:26:32 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:07.363 12:26:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:07.363 12:26:32 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:43:07.363 12:26:32 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:07.624 Running I/O for 1 seconds... 00:43:08.564 17689.00 IOPS, 69.10 MiB/s 00:43:08.564 Latency(us) 00:43:08.564 [2024-12-05T11:26:33.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:08.564 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:43:08.564 nvme0n1 : 1.00 17750.64 69.34 0.00 0.00 7197.64 2362.03 20316.16 00:43:08.564 [2024-12-05T11:26:33.613Z] =================================================================================================================== 00:43:08.564 [2024-12-05T11:26:33.613Z] Total : 17750.64 69.34 0.00 0.00 7197.64 2362.03 20316.16 00:43:08.564 { 00:43:08.564 "results": [ 00:43:08.564 { 00:43:08.564 "job": "nvme0n1", 00:43:08.564 "core_mask": "0x2", 00:43:08.564 "workload": "randrw", 00:43:08.564 "percentage": 50, 00:43:08.564 "status": "finished", 00:43:08.564 "queue_depth": 128, 00:43:08.564 "io_size": 4096, 00:43:08.564 "runtime": 1.003851, 00:43:08.564 "iops": 17750.64227659284, 00:43:08.564 "mibps": 69.33844639294078, 
00:43:08.564 "io_failed": 0, 00:43:08.564 "io_timeout": 0, 00:43:08.564 "avg_latency_us": 7197.635985558486, 00:43:08.564 "min_latency_us": 2362.0266666666666, 00:43:08.564 "max_latency_us": 20316.16 00:43:08.564 } 00:43:08.564 ], 00:43:08.564 "core_count": 1 00:43:08.564 } 00:43:08.564 12:26:33 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:08.564 12:26:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:08.825 12:26:33 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:43:08.825 12:26:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:08.825 12:26:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:08.825 12:26:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:08.825 12:26:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:08.825 12:26:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:08.825 12:26:33 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:43:08.825 12:26:33 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:43:08.825 12:26:33 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:08.825 12:26:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:08.825 12:26:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:08.825 12:26:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:08.825 12:26:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.085 12:26:34 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:43:09.085 12:26:34 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:09.085 12:26:34 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:09.085 12:26:34 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:09.085 12:26:34 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:09.085 12:26:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:09.085 12:26:34 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:09.085 12:26:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:09.085 12:26:34 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:09.085 12:26:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:09.346 [2024-12-05 12:26:34.206709] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:09.346 [2024-12-05 12:26:34.207398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e57c50 (107): Transport endpoint is not connected 00:43:09.346 [2024-12-05 12:26:34.208394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e57c50 (9): Bad file descriptor 00:43:09.346 [2024-12-05 12:26:34.209396] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:09.346 [2024-12-05 12:26:34.209405] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:09.346 [2024-12-05 12:26:34.209411] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:09.346 [2024-12-05 12:26:34.209418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:43:09.346 request: 00:43:09.346 { 00:43:09.346 "name": "nvme0", 00:43:09.346 "trtype": "tcp", 00:43:09.346 "traddr": "127.0.0.1", 00:43:09.346 "adrfam": "ipv4", 00:43:09.346 "trsvcid": "4420", 00:43:09.346 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:09.346 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:09.346 "prchk_reftag": false, 00:43:09.346 "prchk_guard": false, 00:43:09.346 "hdgst": false, 00:43:09.346 "ddgst": false, 00:43:09.346 "psk": "key1", 00:43:09.346 "allow_unrecognized_csi": false, 00:43:09.346 "method": "bdev_nvme_attach_controller", 00:43:09.346 "req_id": 1 00:43:09.346 } 00:43:09.346 Got JSON-RPC error response 00:43:09.346 response: 00:43:09.346 { 00:43:09.346 "code": -5, 00:43:09.346 "message": "Input/output error" 00:43:09.346 } 00:43:09.346 12:26:34 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:09.346 12:26:34 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:09.346 12:26:34 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:09.346 12:26:34 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:09.346 12:26:34 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:43:09.346 12:26:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:09.346 12:26:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:09.346 12:26:34 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:43:09.346 12:26:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:09.346 12:26:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.606 12:26:34 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:43:09.606 12:26:34 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:43:09.606 12:26:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:09.606 12:26:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:09.606 12:26:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:09.606 12:26:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:09.606 12:26:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:09.606 12:26:34 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:43:09.606 12:26:34 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:43:09.606 12:26:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:09.867 12:26:34 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:43:09.867 12:26:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:43:10.146 12:26:34 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:43:10.146 12:26:34 keyring_file -- keyring/file.sh@78 -- # jq length 00:43:10.146 12:26:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:10.146 12:26:35 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:43:10.146 12:26:35 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.z2AqLXhnKj 00:43:10.146 12:26:35 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.z2AqLXhnKj 00:43:10.146 12:26:35 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:10.146 12:26:35 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.z2AqLXhnKj 00:43:10.146 12:26:35 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:10.146 12:26:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:10.146 12:26:35 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:10.146 12:26:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:10.146 12:26:35 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.z2AqLXhnKj 00:43:10.146 12:26:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.z2AqLXhnKj 00:43:10.406 [2024-12-05 12:26:35.278253] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.z2AqLXhnKj': 0100660 00:43:10.406 [2024-12-05 12:26:35.278274] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:43:10.406 request: 00:43:10.406 { 00:43:10.406 "name": "key0", 00:43:10.406 "path": "/tmp/tmp.z2AqLXhnKj", 00:43:10.406 "method": "keyring_file_add_key", 00:43:10.406 "req_id": 1 00:43:10.406 } 00:43:10.406 Got JSON-RPC error response 00:43:10.406 response: 00:43:10.406 { 00:43:10.406 "code": -1, 00:43:10.406 "message": "Operation not permitted" 00:43:10.406 } 00:43:10.407 12:26:35 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:10.407 12:26:35 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:10.407 12:26:35 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:10.407 12:26:35 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:10.407 12:26:35 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.z2AqLXhnKj 00:43:10.407 12:26:35 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.z2AqLXhnKj 00:43:10.407 12:26:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.z2AqLXhnKj 00:43:10.407 12:26:35 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.z2AqLXhnKj 00:43:10.407 12:26:35 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:43:10.667 12:26:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:10.667 12:26:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:10.667 12:26:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:10.667 12:26:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:10.667 12:26:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:10.667 12:26:35 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:43:10.667 12:26:35 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:10.667 12:26:35 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:10.667 12:26:35 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:10.667 12:26:35 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:10.667 12:26:35 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:10.667 12:26:35 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:10.667 12:26:35 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:10.667 12:26:35 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:10.667 12:26:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:10.928 [2024-12-05 12:26:35.783549] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.z2AqLXhnKj': No such file or directory 00:43:10.928 [2024-12-05 12:26:35.783563] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:43:10.928 [2024-12-05 12:26:35.783577] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:43:10.928 [2024-12-05 12:26:35.783583] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:43:10.928 [2024-12-05 12:26:35.783589] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:10.928 [2024-12-05 12:26:35.783594] bdev_nvme.c:6796:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:43:10.928 request: 00:43:10.928 { 00:43:10.928 "name": "nvme0", 00:43:10.928 "trtype": "tcp", 00:43:10.928 "traddr": "127.0.0.1", 00:43:10.928 "adrfam": "ipv4", 00:43:10.928 "trsvcid": "4420", 00:43:10.928 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:10.928 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:43:10.928 "prchk_reftag": false, 00:43:10.928 "prchk_guard": false, 00:43:10.928 "hdgst": false, 00:43:10.928 "ddgst": false, 00:43:10.928 "psk": "key0", 00:43:10.928 "allow_unrecognized_csi": false, 00:43:10.928 "method": "bdev_nvme_attach_controller", 00:43:10.928 "req_id": 1 00:43:10.928 } 00:43:10.928 Got JSON-RPC error response 00:43:10.928 response: 00:43:10.928 { 00:43:10.928 "code": -19, 00:43:10.928 "message": "No such device" 00:43:10.928 } 00:43:10.928 12:26:35 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:10.928 12:26:35 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:10.928 12:26:35 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:10.928 12:26:35 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:10.928 12:26:35 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:43:10.928 12:26:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:11.189 12:26:35 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:11.189 12:26:35 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:11.189 12:26:35 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:11.189 12:26:35 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:11.189 12:26:35 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:11.189 12:26:35 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:11.189 12:26:36 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Ltd0OKlApA 00:43:11.189 12:26:36 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:11.190 12:26:36 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:11.190 12:26:36 keyring_file -- 
nvmf/common.sh@504 -- # local prefix key digest 00:43:11.190 12:26:36 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:43:11.190 12:26:36 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:43:11.190 12:26:36 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:43:11.190 12:26:36 keyring_file -- nvmf/common.sh@507 -- # python - 00:43:11.190 12:26:36 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Ltd0OKlApA 00:43:11.190 12:26:36 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Ltd0OKlApA 00:43:11.190 12:26:36 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Ltd0OKlApA 00:43:11.190 12:26:36 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ltd0OKlApA 00:43:11.190 12:26:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ltd0OKlApA 00:43:11.190 12:26:36 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:11.190 12:26:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:11.451 nvme0n1 00:43:11.451 12:26:36 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:43:11.451 12:26:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:11.451 12:26:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:11.451 12:26:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:11.451 12:26:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:11.451 12:26:36 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:11.711 12:26:36 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:43:11.711 12:26:36 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:43:11.711 12:26:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:11.973 12:26:36 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:43:11.973 12:26:36 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:43:11.973 12:26:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:11.973 12:26:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:11.973 12:26:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:11.973 12:26:36 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:43:11.973 12:26:36 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:43:11.973 12:26:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:11.973 12:26:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:11.973 12:26:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:11.973 12:26:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:11.973 12:26:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:12.233 12:26:37 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:43:12.233 12:26:37 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:12.233 12:26:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:43:12.492 12:26:37 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:43:12.492 12:26:37 keyring_file -- keyring/file.sh@105 -- # jq length 00:43:12.492 12:26:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:12.492 12:26:37 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:43:12.492 12:26:37 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Ltd0OKlApA 00:43:12.492 12:26:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Ltd0OKlApA 00:43:12.752 12:26:37 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.oGKLDVKmbe 00:43:12.752 12:26:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.oGKLDVKmbe 00:43:13.013 12:26:37 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:13.013 12:26:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:13.013 nvme0n1 00:43:13.273 12:26:38 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:43:13.273 12:26:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:13.534 12:26:38 keyring_file -- keyring/file.sh@113 -- # config='{ 00:43:13.534 "subsystems": [ 00:43:13.534 { 00:43:13.534 "subsystem": 
"keyring", 00:43:13.534 "config": [ 00:43:13.534 { 00:43:13.534 "method": "keyring_file_add_key", 00:43:13.534 "params": { 00:43:13.534 "name": "key0", 00:43:13.534 "path": "/tmp/tmp.Ltd0OKlApA" 00:43:13.534 } 00:43:13.534 }, 00:43:13.534 { 00:43:13.534 "method": "keyring_file_add_key", 00:43:13.534 "params": { 00:43:13.534 "name": "key1", 00:43:13.534 "path": "/tmp/tmp.oGKLDVKmbe" 00:43:13.534 } 00:43:13.534 } 00:43:13.534 ] 00:43:13.534 }, 00:43:13.534 { 00:43:13.534 "subsystem": "iobuf", 00:43:13.534 "config": [ 00:43:13.534 { 00:43:13.534 "method": "iobuf_set_options", 00:43:13.534 "params": { 00:43:13.534 "small_pool_count": 8192, 00:43:13.534 "large_pool_count": 1024, 00:43:13.534 "small_bufsize": 8192, 00:43:13.534 "large_bufsize": 135168, 00:43:13.534 "enable_numa": false 00:43:13.534 } 00:43:13.534 } 00:43:13.534 ] 00:43:13.534 }, 00:43:13.534 { 00:43:13.534 "subsystem": "sock", 00:43:13.534 "config": [ 00:43:13.534 { 00:43:13.534 "method": "sock_set_default_impl", 00:43:13.534 "params": { 00:43:13.534 "impl_name": "posix" 00:43:13.534 } 00:43:13.534 }, 00:43:13.534 { 00:43:13.534 "method": "sock_impl_set_options", 00:43:13.534 "params": { 00:43:13.534 "impl_name": "ssl", 00:43:13.534 "recv_buf_size": 4096, 00:43:13.534 "send_buf_size": 4096, 00:43:13.534 "enable_recv_pipe": true, 00:43:13.534 "enable_quickack": false, 00:43:13.534 "enable_placement_id": 0, 00:43:13.534 "enable_zerocopy_send_server": true, 00:43:13.534 "enable_zerocopy_send_client": false, 00:43:13.534 "zerocopy_threshold": 0, 00:43:13.534 "tls_version": 0, 00:43:13.534 "enable_ktls": false 00:43:13.534 } 00:43:13.534 }, 00:43:13.534 { 00:43:13.534 "method": "sock_impl_set_options", 00:43:13.534 "params": { 00:43:13.534 "impl_name": "posix", 00:43:13.534 "recv_buf_size": 2097152, 00:43:13.534 "send_buf_size": 2097152, 00:43:13.534 "enable_recv_pipe": true, 00:43:13.534 "enable_quickack": false, 00:43:13.534 "enable_placement_id": 0, 00:43:13.534 "enable_zerocopy_send_server": true, 
00:43:13.534 "enable_zerocopy_send_client": false, 00:43:13.534 "zerocopy_threshold": 0, 00:43:13.534 "tls_version": 0, 00:43:13.534 "enable_ktls": false 00:43:13.534 } 00:43:13.534 } 00:43:13.534 ] 00:43:13.534 }, 00:43:13.534 { 00:43:13.534 "subsystem": "vmd", 00:43:13.534 "config": [] 00:43:13.534 }, 00:43:13.534 { 00:43:13.534 "subsystem": "accel", 00:43:13.534 "config": [ 00:43:13.534 { 00:43:13.534 "method": "accel_set_options", 00:43:13.534 "params": { 00:43:13.534 "small_cache_size": 128, 00:43:13.534 "large_cache_size": 16, 00:43:13.534 "task_count": 2048, 00:43:13.534 "sequence_count": 2048, 00:43:13.534 "buf_count": 2048 00:43:13.534 } 00:43:13.534 } 00:43:13.534 ] 00:43:13.534 }, 00:43:13.534 { 00:43:13.534 "subsystem": "bdev", 00:43:13.534 "config": [ 00:43:13.534 { 00:43:13.534 "method": "bdev_set_options", 00:43:13.534 "params": { 00:43:13.534 "bdev_io_pool_size": 65535, 00:43:13.534 "bdev_io_cache_size": 256, 00:43:13.534 "bdev_auto_examine": true, 00:43:13.534 "iobuf_small_cache_size": 128, 00:43:13.534 "iobuf_large_cache_size": 16 00:43:13.534 } 00:43:13.534 }, 00:43:13.534 { 00:43:13.534 "method": "bdev_raid_set_options", 00:43:13.534 "params": { 00:43:13.534 "process_window_size_kb": 1024, 00:43:13.534 "process_max_bandwidth_mb_sec": 0 00:43:13.534 } 00:43:13.534 }, 00:43:13.534 { 00:43:13.534 "method": "bdev_iscsi_set_options", 00:43:13.534 "params": { 00:43:13.534 "timeout_sec": 30 00:43:13.534 } 00:43:13.534 }, 00:43:13.534 { 00:43:13.534 "method": "bdev_nvme_set_options", 00:43:13.534 "params": { 00:43:13.534 "action_on_timeout": "none", 00:43:13.534 "timeout_us": 0, 00:43:13.534 "timeout_admin_us": 0, 00:43:13.534 "keep_alive_timeout_ms": 10000, 00:43:13.534 "arbitration_burst": 0, 00:43:13.534 "low_priority_weight": 0, 00:43:13.534 "medium_priority_weight": 0, 00:43:13.534 "high_priority_weight": 0, 00:43:13.534 "nvme_adminq_poll_period_us": 10000, 00:43:13.534 "nvme_ioq_poll_period_us": 0, 00:43:13.534 "io_queue_requests": 512, 
00:43:13.534 "delay_cmd_submit": true, 00:43:13.534 "transport_retry_count": 4, 00:43:13.534 "bdev_retry_count": 3, 00:43:13.534 "transport_ack_timeout": 0, 00:43:13.534 "ctrlr_loss_timeout_sec": 0, 00:43:13.534 "reconnect_delay_sec": 0, 00:43:13.534 "fast_io_fail_timeout_sec": 0, 00:43:13.534 "disable_auto_failback": false, 00:43:13.534 "generate_uuids": false, 00:43:13.534 "transport_tos": 0, 00:43:13.534 "nvme_error_stat": false, 00:43:13.534 "rdma_srq_size": 0, 00:43:13.534 "io_path_stat": false, 00:43:13.534 "allow_accel_sequence": false, 00:43:13.534 "rdma_max_cq_size": 0, 00:43:13.534 "rdma_cm_event_timeout_ms": 0, 00:43:13.534 "dhchap_digests": [ 00:43:13.534 "sha256", 00:43:13.534 "sha384", 00:43:13.534 "sha512" 00:43:13.534 ], 00:43:13.534 "dhchap_dhgroups": [ 00:43:13.534 "null", 00:43:13.534 "ffdhe2048", 00:43:13.534 "ffdhe3072", 00:43:13.534 "ffdhe4096", 00:43:13.534 "ffdhe6144", 00:43:13.534 "ffdhe8192" 00:43:13.534 ] 00:43:13.534 } 00:43:13.534 }, 00:43:13.534 { 00:43:13.534 "method": "bdev_nvme_attach_controller", 00:43:13.534 "params": { 00:43:13.534 "name": "nvme0", 00:43:13.534 "trtype": "TCP", 00:43:13.534 "adrfam": "IPv4", 00:43:13.534 "traddr": "127.0.0.1", 00:43:13.534 "trsvcid": "4420", 00:43:13.534 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:13.534 "prchk_reftag": false, 00:43:13.534 "prchk_guard": false, 00:43:13.534 "ctrlr_loss_timeout_sec": 0, 00:43:13.534 "reconnect_delay_sec": 0, 00:43:13.534 "fast_io_fail_timeout_sec": 0, 00:43:13.534 "psk": "key0", 00:43:13.534 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:13.534 "hdgst": false, 00:43:13.534 "ddgst": false, 00:43:13.534 "multipath": "multipath" 00:43:13.534 } 00:43:13.534 }, 00:43:13.534 { 00:43:13.534 "method": "bdev_nvme_set_hotplug", 00:43:13.534 "params": { 00:43:13.534 "period_us": 100000, 00:43:13.534 "enable": false 00:43:13.534 } 00:43:13.534 }, 00:43:13.534 { 00:43:13.534 "method": "bdev_wait_for_examine" 00:43:13.534 } 00:43:13.534 ] 00:43:13.534 }, 00:43:13.534 { 
00:43:13.534 "subsystem": "nbd", 00:43:13.534 "config": [] 00:43:13.534 } 00:43:13.534 ] 00:43:13.534 }' 00:43:13.534 12:26:38 keyring_file -- keyring/file.sh@115 -- # killprocess 1697333 00:43:13.534 12:26:38 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1697333 ']' 00:43:13.534 12:26:38 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1697333 00:43:13.534 12:26:38 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:13.534 12:26:38 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:13.534 12:26:38 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1697333 00:43:13.535 12:26:38 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:13.535 12:26:38 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:13.535 12:26:38 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1697333' 00:43:13.535 killing process with pid 1697333 00:43:13.535 12:26:38 keyring_file -- common/autotest_common.sh@973 -- # kill 1697333 00:43:13.535 Received shutdown signal, test time was about 1.000000 seconds 00:43:13.535 00:43:13.535 Latency(us) 00:43:13.535 [2024-12-05T11:26:38.584Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:13.535 [2024-12-05T11:26:38.584Z] =================================================================================================================== 00:43:13.535 [2024-12-05T11:26:38.584Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:13.535 12:26:38 keyring_file -- common/autotest_common.sh@978 -- # wait 1697333 00:43:13.535 12:26:38 keyring_file -- keyring/file.sh@118 -- # bperfpid=1699144 00:43:13.535 12:26:38 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1699144 /var/tmp/bperf.sock 00:43:13.535 12:26:38 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1699144 ']' 00:43:13.535 12:26:38 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:43:13.535 12:26:38 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:13.535 12:26:38 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:13.535 12:26:38 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:13.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:13.535 12:26:38 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:13.535 12:26:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:13.535 12:26:38 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:43:13.535 "subsystems": [ 00:43:13.535 { 00:43:13.535 "subsystem": "keyring", 00:43:13.535 "config": [ 00:43:13.535 { 00:43:13.535 "method": "keyring_file_add_key", 00:43:13.535 "params": { 00:43:13.535 "name": "key0", 00:43:13.535 "path": "/tmp/tmp.Ltd0OKlApA" 00:43:13.535 } 00:43:13.535 }, 00:43:13.535 { 00:43:13.535 "method": "keyring_file_add_key", 00:43:13.535 "params": { 00:43:13.535 "name": "key1", 00:43:13.535 "path": "/tmp/tmp.oGKLDVKmbe" 00:43:13.535 } 00:43:13.535 } 00:43:13.535 ] 00:43:13.535 }, 00:43:13.535 { 00:43:13.535 "subsystem": "iobuf", 00:43:13.535 "config": [ 00:43:13.535 { 00:43:13.535 "method": "iobuf_set_options", 00:43:13.535 "params": { 00:43:13.535 "small_pool_count": 8192, 00:43:13.535 "large_pool_count": 1024, 00:43:13.535 "small_bufsize": 8192, 00:43:13.535 "large_bufsize": 135168, 00:43:13.535 "enable_numa": false 00:43:13.535 } 00:43:13.535 } 00:43:13.535 ] 00:43:13.535 }, 00:43:13.535 { 00:43:13.535 "subsystem": "sock", 00:43:13.535 "config": [ 00:43:13.535 { 00:43:13.535 "method": "sock_set_default_impl", 00:43:13.535 "params": { 00:43:13.535 "impl_name": "posix" 00:43:13.535 } 00:43:13.535 }, 
00:43:13.535 { 00:43:13.535 "method": "sock_impl_set_options", 00:43:13.535 "params": { 00:43:13.535 "impl_name": "ssl", 00:43:13.535 "recv_buf_size": 4096, 00:43:13.535 "send_buf_size": 4096, 00:43:13.535 "enable_recv_pipe": true, 00:43:13.535 "enable_quickack": false, 00:43:13.535 "enable_placement_id": 0, 00:43:13.535 "enable_zerocopy_send_server": true, 00:43:13.535 "enable_zerocopy_send_client": false, 00:43:13.535 "zerocopy_threshold": 0, 00:43:13.535 "tls_version": 0, 00:43:13.535 "enable_ktls": false 00:43:13.535 } 00:43:13.535 }, 00:43:13.535 { 00:43:13.535 "method": "sock_impl_set_options", 00:43:13.535 "params": { 00:43:13.535 "impl_name": "posix", 00:43:13.535 "recv_buf_size": 2097152, 00:43:13.535 "send_buf_size": 2097152, 00:43:13.535 "enable_recv_pipe": true, 00:43:13.535 "enable_quickack": false, 00:43:13.535 "enable_placement_id": 0, 00:43:13.535 "enable_zerocopy_send_server": true, 00:43:13.535 "enable_zerocopy_send_client": false, 00:43:13.535 "zerocopy_threshold": 0, 00:43:13.535 "tls_version": 0, 00:43:13.535 "enable_ktls": false 00:43:13.535 } 00:43:13.535 } 00:43:13.535 ] 00:43:13.535 }, 00:43:13.535 { 00:43:13.535 "subsystem": "vmd", 00:43:13.535 "config": [] 00:43:13.535 }, 00:43:13.535 { 00:43:13.535 "subsystem": "accel", 00:43:13.535 "config": [ 00:43:13.535 { 00:43:13.535 "method": "accel_set_options", 00:43:13.535 "params": { 00:43:13.535 "small_cache_size": 128, 00:43:13.535 "large_cache_size": 16, 00:43:13.535 "task_count": 2048, 00:43:13.535 "sequence_count": 2048, 00:43:13.535 "buf_count": 2048 00:43:13.535 } 00:43:13.535 } 00:43:13.535 ] 00:43:13.535 }, 00:43:13.535 { 00:43:13.535 "subsystem": "bdev", 00:43:13.535 "config": [ 00:43:13.535 { 00:43:13.535 "method": "bdev_set_options", 00:43:13.535 "params": { 00:43:13.535 "bdev_io_pool_size": 65535, 00:43:13.535 "bdev_io_cache_size": 256, 00:43:13.535 "bdev_auto_examine": true, 00:43:13.535 "iobuf_small_cache_size": 128, 00:43:13.535 "iobuf_large_cache_size": 16 00:43:13.535 } 
00:43:13.535 }, 00:43:13.535 { 00:43:13.535 "method": "bdev_raid_set_options", 00:43:13.535 "params": { 00:43:13.535 "process_window_size_kb": 1024, 00:43:13.535 "process_max_bandwidth_mb_sec": 0 00:43:13.535 } 00:43:13.535 }, 00:43:13.535 { 00:43:13.535 "method": "bdev_iscsi_set_options", 00:43:13.535 "params": { 00:43:13.535 "timeout_sec": 30 00:43:13.535 } 00:43:13.535 }, 00:43:13.535 { 00:43:13.535 "method": "bdev_nvme_set_options", 00:43:13.535 "params": { 00:43:13.535 "action_on_timeout": "none", 00:43:13.535 "timeout_us": 0, 00:43:13.535 "timeout_admin_us": 0, 00:43:13.535 "keep_alive_timeout_ms": 10000, 00:43:13.535 "arbitration_burst": 0, 00:43:13.535 "low_priority_weight": 0, 00:43:13.535 "medium_priority_weight": 0, 00:43:13.535 "high_priority_weight": 0, 00:43:13.535 "nvme_adminq_poll_period_us": 10000, 00:43:13.535 "nvme_ioq_poll_period_us": 0, 00:43:13.535 "io_queue_requests": 512, 00:43:13.535 "delay_cmd_submit": true, 00:43:13.535 "transport_retry_count": 4, 00:43:13.535 "bdev_retry_count": 3, 00:43:13.535 "transport_ack_timeout": 0, 00:43:13.535 "ctrlr_loss_timeout_sec": 0, 00:43:13.535 "reconnect_delay_sec": 0, 00:43:13.535 "fast_io_fail_timeout_sec": 0, 00:43:13.535 "disable_auto_failback": false, 00:43:13.535 "generate_uuids": false, 00:43:13.535 "transport_tos": 0, 00:43:13.535 "nvme_error_stat": false, 00:43:13.535 "rdma_srq_size": 0, 00:43:13.535 "io_path_stat": false, 00:43:13.535 "allow_accel_sequence": false, 00:43:13.535 "rdma_max_cq_size": 0, 00:43:13.535 "rdma_cm_event_timeout_ms": 0, 00:43:13.535 "dhchap_digests": [ 00:43:13.535 "sha256", 00:43:13.535 "sha384", 00:43:13.535 "sha512" 00:43:13.535 ], 00:43:13.535 "dhchap_dhgroups": [ 00:43:13.535 "null", 00:43:13.535 "ffdhe2048", 00:43:13.535 "ffdhe3072", 00:43:13.535 "ffdhe4096", 00:43:13.535 "ffdhe6144", 00:43:13.535 "ffdhe8192" 00:43:13.535 ] 00:43:13.535 } 00:43:13.535 }, 00:43:13.535 { 00:43:13.535 "method": "bdev_nvme_attach_controller", 00:43:13.535 "params": { 00:43:13.535 
"name": "nvme0", 00:43:13.535 "trtype": "TCP", 00:43:13.535 "adrfam": "IPv4", 00:43:13.535 "traddr": "127.0.0.1", 00:43:13.535 "trsvcid": "4420", 00:43:13.535 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:13.535 "prchk_reftag": false, 00:43:13.535 "prchk_guard": false, 00:43:13.535 "ctrlr_loss_timeout_sec": 0, 00:43:13.535 "reconnect_delay_sec": 0, 00:43:13.535 "fast_io_fail_timeout_sec": 0, 00:43:13.535 "psk": "key0", 00:43:13.535 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:13.535 "hdgst": false, 00:43:13.535 "ddgst": false, 00:43:13.535 "multipath": "multipath" 00:43:13.535 } 00:43:13.535 }, 00:43:13.535 { 00:43:13.535 "method": "bdev_nvme_set_hotplug", 00:43:13.535 "params": { 00:43:13.535 "period_us": 100000, 00:43:13.535 "enable": false 00:43:13.535 } 00:43:13.535 }, 00:43:13.535 { 00:43:13.535 "method": "bdev_wait_for_examine" 00:43:13.536 } 00:43:13.536 ] 00:43:13.536 }, 00:43:13.536 { 00:43:13.536 "subsystem": "nbd", 00:43:13.536 "config": [] 00:43:13.536 } 00:43:13.536 ] 00:43:13.536 }' 00:43:13.536 [2024-12-05 12:26:38.544013] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:43:13.536 [2024-12-05 12:26:38.544067] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699144 ] 00:43:13.795 [2024-12-05 12:26:38.625130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:13.796 [2024-12-05 12:26:38.654547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:13.796 [2024-12-05 12:26:38.798400] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:14.366 12:26:39 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:14.366 12:26:39 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:14.366 12:26:39 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:14.366 12:26:39 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:14.366 12:26:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:14.627 12:26:39 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:14.627 12:26:39 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:14.627 12:26:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:14.627 12:26:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:14.627 12:26:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:14.627 12:26:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:14.627 12:26:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:14.627 12:26:39 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:14.627 12:26:39 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:14.627 12:26:39 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:14.627 12:26:39 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:14.627 12:26:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:14.627 12:26:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:14.627 12:26:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:14.887 12:26:39 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:14.887 12:26:39 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:14.887 12:26:39 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:14.887 12:26:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:15.147 12:26:39 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:15.147 12:26:39 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:15.147 12:26:39 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Ltd0OKlApA /tmp/tmp.oGKLDVKmbe 00:43:15.147 12:26:40 keyring_file -- keyring/file.sh@20 -- # killprocess 1699144 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1699144 ']' 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1699144 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1699144 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1699144' 00:43:15.147 killing process with pid 1699144 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@973 -- # kill 1699144 00:43:15.147 Received shutdown signal, test time was about 1.000000 seconds 00:43:15.147 00:43:15.147 Latency(us) 00:43:15.147 [2024-12-05T11:26:40.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:15.147 [2024-12-05T11:26:40.196Z] =================================================================================================================== 00:43:15.147 [2024-12-05T11:26:40.196Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@978 -- # wait 1699144 00:43:15.147 12:26:40 keyring_file -- keyring/file.sh@21 -- # killprocess 1697020 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1697020 ']' 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1697020 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:15.147 12:26:40 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1697020 00:43:15.407 12:26:40 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:15.407 12:26:40 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:15.407 12:26:40 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1697020' 00:43:15.407 killing process with pid 1697020 00:43:15.407 12:26:40 keyring_file -- common/autotest_common.sh@973 -- # kill 1697020 00:43:15.407 12:26:40 keyring_file -- common/autotest_common.sh@978 -- # wait 1697020 00:43:15.407 00:43:15.407 real 0m11.965s 00:43:15.407 user 0m28.975s 00:43:15.407 sys 0m2.707s 00:43:15.407 12:26:40 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:43:15.407 12:26:40 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:15.407 ************************************ 00:43:15.407 END TEST keyring_file 00:43:15.407 ************************************ 00:43:15.667 12:26:40 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:43:15.667 12:26:40 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:15.667 12:26:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:15.667 12:26:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:15.667 12:26:40 -- common/autotest_common.sh@10 -- # set +x 00:43:15.667 ************************************ 00:43:15.667 START TEST keyring_linux 00:43:15.667 ************************************ 00:43:15.667 12:26:40 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:15.667 Joined session keyring: 155853288 00:43:15.667 * Looking for test storage... 
00:43:15.667 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:15.667 12:26:40 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:15.667 12:26:40 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:43:15.667 12:26:40 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:15.667 12:26:40 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:15.667 12:26:40 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:15.667 12:26:40 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:15.667 12:26:40 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:15.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:15.667 --rc genhtml_branch_coverage=1 00:43:15.667 --rc genhtml_function_coverage=1 00:43:15.667 --rc genhtml_legend=1 00:43:15.667 --rc geninfo_all_blocks=1 00:43:15.667 --rc geninfo_unexecuted_blocks=1 00:43:15.667 00:43:15.667 ' 00:43:15.667 12:26:40 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:15.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:15.667 --rc genhtml_branch_coverage=1 00:43:15.667 --rc genhtml_function_coverage=1 00:43:15.667 --rc genhtml_legend=1 00:43:15.667 --rc geninfo_all_blocks=1 00:43:15.667 --rc geninfo_unexecuted_blocks=1 00:43:15.667 00:43:15.667 ' 
00:43:15.667 12:26:40 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:15.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:15.667 --rc genhtml_branch_coverage=1 00:43:15.667 --rc genhtml_function_coverage=1 00:43:15.667 --rc genhtml_legend=1 00:43:15.667 --rc geninfo_all_blocks=1 00:43:15.667 --rc geninfo_unexecuted_blocks=1 00:43:15.667 00:43:15.667 ' 00:43:15.667 12:26:40 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:15.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:15.667 --rc genhtml_branch_coverage=1 00:43:15.667 --rc genhtml_function_coverage=1 00:43:15.667 --rc genhtml_legend=1 00:43:15.667 --rc geninfo_all_blocks=1 00:43:15.667 --rc geninfo_unexecuted_blocks=1 00:43:15.667 00:43:15.667 ' 00:43:15.667 12:26:40 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:15.667 12:26:40 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:15.667 12:26:40 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:15.928 12:26:40 
keyring_linux -- nvmf/common.sh@16 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:15.928 12:26:40 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:15.928 12:26:40 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:15.928 12:26:40 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:15.928 12:26:40 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:15.928 12:26:40 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:15.928 12:26:40 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:15.928 12:26:40 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:15.928 12:26:40 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:15.928 12:26:40 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:43:15.928 12:26:40 keyring_linux -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:43:15.928 12:26:40 keyring_linux -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:43:15.928 12:26:40 keyring_linux -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@50 -- # : 0 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:43:15.928 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@54 -- # have_pci_nics=0 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:15.928 12:26:40 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:15.928 12:26:40 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:15.928 12:26:40 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:15.928 12:26:40 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:15.928 12:26:40 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:15.928 12:26:40 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@506 -- # 
key=00112233445566778899aabbccddeeff 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@507 -- # python - 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:15.928 /tmp/:spdk-test:key0 00:43:15.928 12:26:40 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:43:15.928 12:26:40 keyring_linux -- nvmf/common.sh@507 -- # python - 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:15.928 12:26:40 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:15.928 /tmp/:spdk-test:key1 00:43:15.928 12:26:40 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1699581 00:43:15.928 12:26:40 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 1699581 00:43:15.928 12:26:40 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:15.928 12:26:40 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1699581 ']' 00:43:15.928 12:26:40 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:15.928 12:26:40 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:15.928 12:26:40 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:15.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:15.928 12:26:40 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:15.928 12:26:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:15.928 [2024-12-05 12:26:40.912535] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:43:15.928 [2024-12-05 12:26:40.912609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699581 ] 00:43:16.189 [2024-12-05 12:26:40.999372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:16.189 [2024-12-05 12:26:41.035074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:16.761 12:26:41 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:16.761 12:26:41 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:16.761 12:26:41 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:16.761 12:26:41 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:16.761 12:26:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:16.761 [2024-12-05 12:26:41.694351] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:16.761 null0 00:43:16.761 [2024-12-05 12:26:41.726405] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:16.761 [2024-12-05 12:26:41.726745] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:16.761 12:26:41 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:16.761 12:26:41 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:16.761 656942754 00:43:16.761 12:26:41 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:16.761 14973004 00:43:16.761 12:26:41 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1699769 00:43:16.761 12:26:41 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1699769 /var/tmp/bperf.sock 00:43:16.761 12:26:41 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:16.761 12:26:41 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1699769 ']' 00:43:16.761 12:26:41 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:16.761 12:26:41 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:16.761 12:26:41 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:16.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:16.761 12:26:41 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:16.761 12:26:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:16.761 [2024-12-05 12:26:41.805802] Starting SPDK v25.01-pre git sha1 688351e0e / DPDK 24.03.0 initialization... 
00:43:16.761 [2024-12-05 12:26:41.805851] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1699769 ] 00:43:17.022 [2024-12-05 12:26:41.887886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:17.022 [2024-12-05 12:26:41.917827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:17.592 12:26:42 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:17.592 12:26:42 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:17.592 12:26:42 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:17.592 12:26:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:17.852 12:26:42 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:17.852 12:26:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:18.112 12:26:42 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:18.112 12:26:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:18.112 [2024-12-05 12:26:43.130919] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:18.372 nvme0n1 00:43:18.372 12:26:43 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:43:18.372 12:26:43 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:18.372 12:26:43 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:18.372 12:26:43 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:18.372 12:26:43 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:18.372 12:26:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:18.372 12:26:43 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:18.372 12:26:43 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:18.372 12:26:43 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:18.372 12:26:43 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:18.372 12:26:43 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:18.372 12:26:43 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:18.372 12:26:43 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:18.633 12:26:43 keyring_linux -- keyring/linux.sh@25 -- # sn=656942754 00:43:18.633 12:26:43 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:18.633 12:26:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:18.633 12:26:43 keyring_linux -- keyring/linux.sh@26 -- # [[ 656942754 == \6\5\6\9\4\2\7\5\4 ]] 00:43:18.633 12:26:43 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 656942754 00:43:18.633 12:26:43 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:18.633 12:26:43 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:18.633 Running I/O for 1 seconds... 00:43:20.016 24506.00 IOPS, 95.73 MiB/s 00:43:20.016 Latency(us) 00:43:20.016 [2024-12-05T11:26:45.065Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:20.016 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:20.016 nvme0n1 : 1.01 24507.37 95.73 0.00 0.00 5207.25 3986.77 11468.80 00:43:20.016 [2024-12-05T11:26:45.065Z] =================================================================================================================== 00:43:20.016 [2024-12-05T11:26:45.065Z] Total : 24507.37 95.73 0.00 0.00 5207.25 3986.77 11468.80 00:43:20.016 { 00:43:20.016 "results": [ 00:43:20.016 { 00:43:20.016 "job": "nvme0n1", 00:43:20.016 "core_mask": "0x2", 00:43:20.016 "workload": "randread", 00:43:20.016 "status": "finished", 00:43:20.016 "queue_depth": 128, 00:43:20.016 "io_size": 4096, 00:43:20.016 "runtime": 1.005208, 00:43:20.016 "iops": 24507.36563974819, 00:43:20.016 "mibps": 95.73189703026637, 00:43:20.016 "io_failed": 0, 00:43:20.016 "io_timeout": 0, 00:43:20.016 "avg_latency_us": 5207.246102428794, 00:43:20.016 "min_latency_us": 3986.7733333333335, 00:43:20.016 "max_latency_us": 11468.8 00:43:20.016 } 00:43:20.016 ], 00:43:20.016 "core_count": 1 00:43:20.016 } 00:43:20.016 12:26:44 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:20.016 12:26:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:20.016 12:26:44 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:20.016 12:26:44 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:20.016 12:26:44 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:20.016 12:26:44 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:20.016 12:26:44 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:20.016 12:26:44 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:20.276 12:26:45 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:20.276 12:26:45 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:20.276 12:26:45 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:20.276 12:26:45 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:20.276 12:26:45 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:43:20.276 12:26:45 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:20.276 12:26:45 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:20.276 12:26:45 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:20.276 12:26:45 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:20.276 12:26:45 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:20.276 12:26:45 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:20.276 12:26:45 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:20.276 [2024-12-05 12:26:45.245858] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:20.276 [2024-12-05 12:26:45.246708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3c9e0 (107): Transport endpoint is not connected 00:43:20.276 [2024-12-05 12:26:45.247704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc3c9e0 (9): Bad file descriptor 00:43:20.276 [2024-12-05 12:26:45.248707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:20.276 [2024-12-05 12:26:45.248715] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:20.276 [2024-12-05 12:26:45.248720] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:20.276 [2024-12-05 12:26:45.248727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:43:20.276 request: 00:43:20.276 { 00:43:20.276 "name": "nvme0", 00:43:20.276 "trtype": "tcp", 00:43:20.276 "traddr": "127.0.0.1", 00:43:20.276 "adrfam": "ipv4", 00:43:20.276 "trsvcid": "4420", 00:43:20.276 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:20.276 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:20.276 "prchk_reftag": false, 00:43:20.276 "prchk_guard": false, 00:43:20.276 "hdgst": false, 00:43:20.276 "ddgst": false, 00:43:20.277 "psk": ":spdk-test:key1", 00:43:20.277 "allow_unrecognized_csi": false, 00:43:20.277 "method": "bdev_nvme_attach_controller", 00:43:20.277 "req_id": 1 00:43:20.277 } 00:43:20.277 Got JSON-RPC error response 00:43:20.277 response: 00:43:20.277 { 00:43:20.277 "code": -5, 00:43:20.277 "message": "Input/output error" 00:43:20.277 } 00:43:20.277 12:26:45 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:43:20.277 12:26:45 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:20.277 12:26:45 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:20.277 12:26:45 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@33 -- # sn=656942754 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 656942754 00:43:20.277 1 links removed 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:20.277 
12:26:45 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@33 -- # sn=14973004 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 14973004 00:43:20.277 1 links removed 00:43:20.277 12:26:45 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1699769 00:43:20.277 12:26:45 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1699769 ']' 00:43:20.277 12:26:45 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1699769 00:43:20.277 12:26:45 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:43:20.277 12:26:45 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:20.277 12:26:45 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1699769 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1699769' 00:43:20.537 killing process with pid 1699769 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@973 -- # kill 1699769 00:43:20.537 Received shutdown signal, test time was about 1.000000 seconds 00:43:20.537 00:43:20.537 Latency(us) 00:43:20.537 [2024-12-05T11:26:45.586Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:20.537 [2024-12-05T11:26:45.586Z] =================================================================================================================== 00:43:20.537 [2024-12-05T11:26:45.586Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@978 -- # wait 1699769 
00:43:20.537 12:26:45 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1699581 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1699581 ']' 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1699581 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1699581 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1699581' 00:43:20.537 killing process with pid 1699581 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@973 -- # kill 1699581 00:43:20.537 12:26:45 keyring_linux -- common/autotest_common.sh@978 -- # wait 1699581 00:43:20.796 00:43:20.796 real 0m5.193s 00:43:20.796 user 0m9.655s 00:43:20.796 sys 0m1.455s 00:43:20.796 12:26:45 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:20.796 12:26:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:20.796 ************************************ 00:43:20.796 END TEST keyring_linux 00:43:20.796 ************************************ 00:43:20.796 12:26:45 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:20.796 12:26:45 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:20.796 12:26:45 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:43:20.796 12:26:45 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:43:20.796 12:26:45 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:43:20.796 12:26:45 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:20.796 12:26:45 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:43:20.796 12:26:45 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:43:20.796 12:26:45 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:43:20.796 12:26:45 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:20.796 12:26:45 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:43:20.796 12:26:45 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:20.796 12:26:45 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:20.796 12:26:45 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:43:20.796 12:26:45 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:43:20.796 12:26:45 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:43:20.796 12:26:45 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:43:20.796 12:26:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:20.796 12:26:45 -- common/autotest_common.sh@10 -- # set +x 00:43:20.797 12:26:45 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:43:20.797 12:26:45 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:43:20.797 12:26:45 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:43:20.797 12:26:45 -- common/autotest_common.sh@10 -- # set +x 00:43:28.931 INFO: APP EXITING 00:43:28.931 INFO: killing all VMs 00:43:28.931 INFO: killing vhost app 00:43:28.931 INFO: EXIT DONE 00:43:32.225 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:43:32.225 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:43:32.225 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:43:32.225 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:43:32.225 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:43:32.225 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:43:32.225 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:43:32.225 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:43:32.225 0000:65:00.0 (144d a80a): Already using the nvme driver 00:43:32.225 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:43:32.225 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:43:32.225 
0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:43:32.225 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:43:32.225 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:43:32.225 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:43:32.225 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:43:32.225 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:43:35.520 Cleaning 00:43:35.520 Removing: /var/run/dpdk/spdk0/config 00:43:35.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:35.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:35.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:35.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:35.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:35.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:35.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:35.520 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:35.520 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:35.520 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:35.520 Removing: /var/run/dpdk/spdk1/config 00:43:35.520 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:35.520 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:35.520 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:35.520 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:35.520 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:35.520 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:35.520 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:35.520 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:35.520 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:35.520 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:35.520 Removing: /var/run/dpdk/spdk2/config 00:43:35.520 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:35.520 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:43:35.520 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:43:35.520 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:43:35.520 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:43:35.520 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:43:35.520 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:43:35.520 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:43:35.520 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:43:35.520 Removing: /var/run/dpdk/spdk2/hugepage_info 00:43:35.520 Removing: /var/run/dpdk/spdk3/config 00:43:35.520 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:43:35.520 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:43:35.520 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:43:35.520 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:43:35.520 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:43:35.520 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:43:35.520 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:43:35.520 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:43:35.520 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:43:35.520 Removing: /var/run/dpdk/spdk3/hugepage_info 00:43:35.520 Removing: /var/run/dpdk/spdk4/config 00:43:35.520 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:43:35.520 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:43:35.520 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:43:35.520 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:43:35.520 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:43:35.520 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:43:35.520 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:43:35.520 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:43:35.779 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:43:35.779 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:43:35.779 Removing: /dev/shm/bdev_svc_trace.1 00:43:35.779 Removing: /dev/shm/nvmf_trace.0 00:43:35.779 Removing: /dev/shm/spdk_tgt_trace.pid1120893 00:43:35.779 Removing: /var/run/dpdk/spdk0 00:43:35.779 Removing: /var/run/dpdk/spdk1 00:43:35.779 Removing: /var/run/dpdk/spdk2 00:43:35.779 Removing: /var/run/dpdk/spdk3 00:43:35.779 Removing: /var/run/dpdk/spdk4 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1119166 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1120893 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1121472 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1122630 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1122841 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1124095 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1124237 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1124693 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1125772 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1126307 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1126699 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1127097 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1127513 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1127914 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1128267 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1128441 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1128711 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1130078 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1133350 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1133709 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1134075 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1134260 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1134786 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1134798 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1135412 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1135505 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1135871 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1136040 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1136252 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1136490 00:43:35.779 Removing: 
/var/run/dpdk/spdk_pid1137033 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1137279 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1137561 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1142337 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1147855 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1160429 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1161114 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1166414 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1166895 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1171985 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1179121 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1182511 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1195100 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1206754 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1208774 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1209889 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1231036 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1235959 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1292642 00:43:35.779 Removing: /var/run/dpdk/spdk_pid1299177 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1306247 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1314735 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1314743 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1315752 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1316766 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1317800 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1318432 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1318539 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1318770 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1319010 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1319095 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1320103 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1321109 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1322114 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1322783 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1322786 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1323124 00:43:36.039 Removing: /var/run/dpdk/spdk_pid1324563 
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1325776
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1335681
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1369493
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1374923
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1376922
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1379065
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1379269
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1379584
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1379628
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1380355
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1382695
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1383803
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1384497
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1387200
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1387913
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1388662
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1394106
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1401006
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1401007
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1401008
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1405744
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1415990
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1420847
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1428159
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1429769
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1431478
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1433332
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1438840
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1444221
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1449858
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1459029
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1459154
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1464388
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1464573
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1464738
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1465352
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1465405
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1470892
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1471635
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1477087
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1480213
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1486916
00:43:36.039 Removing: /var/run/dpdk/spdk_pid1493478
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1504306
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1512743
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1512791
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1535622
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1536448
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1537137
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1537812
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1538870
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1539560
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1540239
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1540831
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1545921
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1546165
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1553487
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1553862
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1560810
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1565860
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1577575
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1578245
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1583399
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1583830
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1588924
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1595812
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1598893
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1611655
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1622395
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1624375
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1625410
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1645363
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1650104
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1653283
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1661644
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1661650
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1667546
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1669975
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1672265
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1673591
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1675976
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1677504
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1687445
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1687924
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1688508
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1691429
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1692056
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1692478
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1697020
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1697333
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1699144
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1699581
00:43:36.299 Removing: /var/run/dpdk/spdk_pid1699769
00:43:36.299 Clean
00:43:36.559 12:27:01 -- common/autotest_common.sh@1453 -- # return 0
00:43:36.559 12:27:01 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:43:36.559 12:27:01 -- common/autotest_common.sh@732 -- # xtrace_disable
00:43:36.559 12:27:01 -- common/autotest_common.sh@10 -- # set +x
00:43:36.559 12:27:01 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:43:36.559 12:27:01 -- common/autotest_common.sh@732 -- # xtrace_disable
00:43:36.559 12:27:01 -- common/autotest_common.sh@10 -- # set +x
00:43:36.559 12:27:01 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:43:36.559 12:27:01 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:43:36.559 12:27:01 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:43:36.559 12:27:01 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:43:36.559 12:27:01 -- spdk/autotest.sh@398 -- # hostname
00:43:36.559 12:27:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:43:36.877 geninfo: WARNING: invalid characters removed from testname!
00:44:03.524 12:27:27 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:05.426 12:27:30 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:07.332 12:27:32 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:09.242 12:27:33 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:11.153 12:27:36 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:13.062 12:27:37 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:14.445 12:27:39 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:44:14.445 12:27:39 -- spdk/autorun.sh@1 -- $ timing_finish
00:44:14.445 12:27:39 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:44:14.445 12:27:39 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:44:14.445 12:27:39 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:44:14.445 12:27:39 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:44:14.445 + [[ -n 1034030 ]]
00:44:14.445 + sudo kill 1034030
00:44:14.717 [Pipeline] }
00:44:14.736 [Pipeline] // stage
00:44:14.741 [Pipeline] }
00:44:14.757 [Pipeline] // timeout
00:44:14.763 [Pipeline] }
00:44:14.779 [Pipeline] // catchError
00:44:14.785 [Pipeline] }
00:44:14.800 [Pipeline] // wrap
00:44:14.806 [Pipeline] }
00:44:14.818 [Pipeline] // catchError
00:44:14.827 [Pipeline] stage
00:44:14.829 [Pipeline] { (Epilogue)
00:44:14.842 [Pipeline] catchError
00:44:14.844 [Pipeline] {
00:44:14.857 [Pipeline] echo
00:44:14.859 Cleanup processes
00:44:14.865 [Pipeline] sh
00:44:15.156 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:15.156 1713160 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:15.171 [Pipeline] sh
00:44:15.461 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:15.461 ++ grep -v 'sudo pgrep'
00:44:15.461 ++ awk '{print $1}'
00:44:15.461 + sudo kill -9
00:44:15.461 + true
00:44:15.476 [Pipeline] sh
00:44:15.766 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:44:28.011 [Pipeline] sh
00:44:28.301 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:44:28.301 Artifacts sizes are good
00:44:28.319 [Pipeline] archiveArtifacts
00:44:28.328 Archiving artifacts
00:44:28.460 [Pipeline] sh
00:44:28.773 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:44:28.789 [Pipeline] cleanWs
00:44:28.800 [WS-CLEANUP] Deleting project workspace...
00:44:28.800 [WS-CLEANUP] Deferred wipeout is used...
00:44:28.807 [WS-CLEANUP] done
00:44:28.809 [Pipeline] }
00:44:28.827 [Pipeline] // catchError
00:44:28.839 [Pipeline] sh
00:44:29.200 + logger -p user.info -t JENKINS-CI
00:44:29.211 [Pipeline] }
00:44:29.224 [Pipeline] // stage
00:44:29.230 [Pipeline] }
00:44:29.244 [Pipeline] // node
00:44:29.249 [Pipeline] End of Pipeline
00:44:29.277 Finished: SUCCESS